Q & A on Ocean Predictoor Traction Stats

https://blog.oceanprotocol.com/q-a-on-ocean-predictoor-traction-stats-a9de0db8bc6d?source=rss----9d6d3078be3d---4

Ocean Predictoor has promising stats. What do they mean?

Introduction

In Ocean Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $.

Ocean Protocol recently tweeted some promising numbers on Predictoor traction. This led to Q’s from the community. This blog post answers those Q’s.

This post is organized as follows:

  • Recent tweets on Predictoor traction
  • Background: focus of Ocean Predictoor / DF team
  • Reference: key sources of data
  • The Q & A itself
  • Conclusion

Recent Tweets on Predictoor Traction

Ocean Protocol had three recent tweets with promising traction statistics. Here they are.

The first tweet highlights data from DappRadar’s new “Ocean Predictoor” page.

Ocean Protocol on Twitter: “✨ 1.66 *million* ✨ That’s the # of txs for Ocean Predictoor, in just 30 days. 🧨 https://t.co/rGqEvseGP4 It only launched 3 mos ago. 🙀 This is AI in action 🤖: AI-powered prediction & trading bots on crypto price feeds. All earning $. Try for yourself! https://t.co/5EbOFZAQSq https://t.co/dB62m7O5xw pic.twitter.com/1E6rteBPvk”

The second tweet shows data from AutoBot Ocean’s new “Predictoor” page:

Ocean Protocol on Twitter: “Gm! 👋 8624.18 OCEAN ($4300). That’s the $ earned in one month, by one address submitting predictions to 1 Ocean Predictoor feed. (The top earner.) Multiply by 20 for an idea of potential earnings across 20 feeds. This past month, all predictoors earned a total of 144,200… pic.twitter.com/eHannMnACH”

The final tweet highlights data from the Autobot Ocean “Volumes” page.

From these tweets, the community had excellent questions. We will answer them below. But first, a couple backgrounders to provide context.

Background: Focus of Ocean Predictoor / DF Team

The main goal of the Ocean / Predictoor / DF team right now is: make it easy & obvious to earn $ as a trader, using Predictoor.

  • Once that happens, there will be strong organic demand for the feeds, which in turn drives sales, which in turn drives predictoor income.
  • This also means we’re spending less time on other potential things: detailed blogs, AMAs, webapp, analytics. Focus focus.
  • In short, $ success for traders will drive success everywhere else.

Reference: Key Sources of Data

I refer to these below. You can also use them yourself.

Ocean docs help too.

Questions & Answers

How is a transaction counted?

We submitted a list of smart contract addresses to dappradar.com. Whenever one of those has a tx against it, it’s counted.

The addresses we submitted are: the Ocean datatoken contracts for each of the 20 prediction feeds, plus a couple more.

I believe we still have some more addresses to add: sales volume is showing up as zero, which is obviously wrong. We’re working to fix it; it may be that we need to add the address of the FixedRateExchange contract for each feed.

Are the Ocean Predictoor ‘farming’ rewards part of the transactions? If so, how much do they account for?

Payout of Predictoor Data Farming rewards is a small fraction of txs. However, the $ incentive is likely a key driver for near-term traction.

If you’re a predictoor submitting prediction transactions, you can make $ via:

  1. Organic sales
  2. Sales from Predictoor Data Farming rewards
  3. Others’ stake slashed when they are wrong.

Of these:

  • (3) evens out to about zero if your accuracy is about the same as others, and your stake doesn’t exceed expected sales
  • (2) is currently the biggest amount. We expect this in the early days, as it’s kickstarting the usage. (How to see: Compare “Autobot: Volumes” for all $ sales, to DF $ budget in Predictoor DF docs)
  • (1) will ultimately be the biggest amount by far, once it becomes easy & obvious to earn $ as a trader.
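To make the three income sources concrete, here is a minimal sketch in Python. Every number is made up for illustration; none are real Predictoor figures.

```python
# Hypothetical sketch of a predictoor's income over a period. All figures
# are illustrative only, not real Predictoor data.

def predictoor_income(organic_sales, df_reward_sales,
                      slashing_gains, slashing_losses):
    """Net OCEAN earned = sales income + net stake reshuffling."""
    return organic_sales + df_reward_sales + slashing_gains - slashing_losses

# Early days: DF-driven sales (2) dominate, organic sales (1) are small,
# and stake reshuffling (3) roughly nets to zero at average accuracy.
net = predictoor_income(organic_sales=50.0, df_reward_sales=400.0,
                        slashing_gains=120.0, slashing_losses=115.0)
print(net)   # 455.0
```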

“AutoBot: Volumes” page shows volumes for each feed.

How many unique addresses are paying for feeds?

We haven’t computed analytics for that yet. You *could* query the subgraph for yourself, see link above.

Are there return users?

For predictoors: Yes. In fact, it seems that a high % of predictoors are returning, ie low churn.

For traders: We haven’t computed analytics for that yet. What will really make a difference is traders easily & obviously earning $; hence our focus on that.

Is the overall number of users increasing?

Yes. See “DappRadar : Predictoor” page, the “UAW” line on the plot. UAW = Unique Active Wallets.

How much revenue are these transactions giving to the Ocean Protocol?

Fee details (from Predictoor docs):

  • 0.1% community swap fee
  • 20% to Ocean Protocol Foundation. (Will be used to further drive Predictoor, and to burn OCEAN.)

What numbers would need to be met before, say, another 100 price feeds are added to Predictoor (1, 15, 30 min feeds), more token pairs including OCEAN (helping increase OCEAN volume), etc.?

As mentioned above, we see that the biggest driver of traction will be traders making $. Our focus is to make that easy & obvious. As part of our work, we are exploring other trading venues & pairs. We’re also professionalizing our internal data engineering pipeline, and starting to build an analytics app inspired by my previous work on Computer-Aided Design. All of this is already open-source; you can track our progress in GitHub.

Once “making $” is established, then we will scale up accordingly. Way more defi feeds; then beyond to other verticals taking us to 10K+ feeds; AI DAOs; then building large-scale AIs across those 10K+ feeds. One thing will lead to another.

Summary

Some recent Ocean tweets have shared some promising traction numbers for Ocean Predictoor, via data on DappRadar and Autobot Ocean. That’s great news! It’s the sign of a growing ecosystem of predictoors and traders.

To really take this to the next level, the Ocean Predictoor / DF team is working to make it easy & obvious to earn $ as a trader, using Predictoor. Success there will drive success & growth everywhere else.

2024 will be a big year for Ocean:)

To get started predicting or trading, go to predictoor.ai now.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. In Ocean Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $.

Follow Ocean on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord. Or, track Ocean progress directly on GitHub.


Q & A on Ocean Predictoor Traction Stats was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Introducing Predictoor Data Farming

https://blog.oceanprotocol.com/introducing-predictoor-data-farming-ad4c95f4a9aa?source=rss----9d6d3078be3d---4

Run AI-powered prediction bots and earn. 37,000 OCEAN weekly ongoing + 100,000 ROSE weekly for first 4 weeks

Contents
1. Introduction
2. Predictoor DF Timing
3. Predictoor DF Reward Amounts
4. How to Earn $ Via Predictoor DF
5. How to Earn More $ Via Passive DF & Volume DF
6. Conclusion
Appendix: Updates to Challenge DF

Summary

Predictoor Data Farming is a new Ocean incentives program that amplifies predictoors’ earnings, via 37K OCEAN + 100K ROSE (≈$21K USD) in weekly extra sales to Predictoor data feeds [1]. This higher baseline of sales also makes Volume DF and Passive DF more attractive.

1. Introduction

Ocean Predictoor data feeds predict whether BTC, ETH etc will rise or fall 5min or 1h into the future. Such feeds are alpha to traders. These feeds are crowdsourced by “predictoors”: people running AI-powered prediction bots. Predictoors earn from sales of the feeds (to traders) and from stake reshuffling ($ going from incorrect predictoors to correct ones). Predictoor’s longer-term vision is 10,000 feeds, going beyond finance to weather, agriculture, and more.

Predictoor is a product built by Ocean Protocol core team. Predictoor smart contracts run on Oasis Sapphire, the only privacy-preserving EVM chain in production. The Ocean and Oasis core teams collaborate closely ❤️.

Data Farming (DF) is Ocean’s incentive program. It rewards participants with OCEAN for locking OCEAN into veOCEAN, curating data, or making predictions — all in the name of driving data consume volume (DCV). DF is organized into two streams: Passive DF and Active DF. Passive DF allows for passive earning potential. Active DF requires more engagement; it has several substreams.

Predictoor Data Farming is a new substream of Active Data Farming. It amplifies existing predictoors’ earnings based on their accuracy and stake. It does so by basically contributing extra sales to Ocean Predictoor data feeds. Let’s dive in!

2. Predictoor DF Timing

Predictoor DF starts counting on Nov 9, 2023, at the beginning of Data Farming Round 62 (DF62). It runs indefinitely.

3. Predictoor DF Reward Amounts

Predictoor DF has two components: OCEAN rewards and Oasis ROSE rewards. Here are details:

  1. OCEAN payouts of 37,000 OCEAN/week, ongoing.
     — A special “DF buyer” bot purchases Predictoor feeds. It starts operating on Nov 9. Every day, it spends 1/7 of the weekly Predictoor OCEAN budget for another 24h subscription. It spends an equal amount per feed. (Currently there are 20 feeds: 10 x 5min, 10 x 1h.)
     — The OCEAN comes from the Ocean DF budget, as part of the 75,000 OCEAN/week for Active DF. The Volume DF budget has been adjusted to 37,000 OCEAN/week, and Challenge DF to 1,000 OCEAN/week. Here are details.
  2. ROSE payouts of 100,000 ROSE/week for the first 4 weeks.
     — Payout is at the end of the DF round. Therefore there will be payouts at the end of DF62, DF63, DF64, and DF65. Payout for a given predictoor is pro-rata to the net earnings of that predictoor over that DF round, specifically (total sales $ to the predictoor) minus (predictoor stake slashed due to being wrong).
     — The ROSE comes from a generous contribution of Oasis Protocol Foundation 👪🙏.
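The pro-rata ROSE payout rule above can be sketched in Python. The addresses and amounts are hypothetical, and excluding predictoors with negative net earnings from the split is our simplifying assumption, not the exact payout logic:

```python
# Sketch of the weekly ROSE payout: each predictoor's share is pro-rata to
# net earnings for the round (total sales income minus stake slashed).
# Addresses/amounts are hypothetical; negative-earnings handling is assumed.

WEEKLY_ROSE = 100_000

def rose_payouts(net_earnings):
    """net_earnings: {address: sales income minus stake slashed}."""
    positive = {a: e for a, e in net_earnings.items() if e > 0}
    total = sum(positive.values())
    return {a: WEEKLY_ROSE * e / total for a, e in positive.items()}

payouts = rose_payouts({"0xAlice": 300.0, "0xBob": 100.0, "0xCarol": -50.0})
print(payouts)   # {'0xAlice': 75000.0, '0xBob': 25000.0}
```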

For further details about DF more generally, including DF stream budgets, see Data Farming docs and “Ocean Data Farming Main is Here” blog post.

4. How to Earn $ Via Predictoor DF

4.1 How to become a predictoor

Simple, run your own prediction bot! 🤖

4.2 On OCEAN Rewards in Predictoor DF

  • Duration: ongoing
  • To be eligible: predictoors are automatically eligible 🧘
  • To claim: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs.

4.3 On $ROSE rewards in Predictoor DF

  • Duration: Runs 4 DF rounds — DF62, DF63, DF64, DF65. Limited time!
  • ⚠️ To be eligible for a given DF round: you MUST run the Predictoor $OCEAN payout script <= 4 days after the round ends, ie between Thu 00:00 UTC & Sun 23:59 UTC
  • To claim: see instructions in Predictoor DF user guide in Ocean docs

5. How to Earn More $ Via Passive DF & Volume DF

Predictoor DF makes Active DF more attractive, and in turn Passive DF. Let’s zoom in.

First, we review Passive DF and Volume DF (details here):

  • Passive DF has 75K OCEAN weekly rewards budget. To participate, one locks OCEAN for veOCEAN. Rewards to a user are pro-rata to the user’s veOCEAN holdings.
  • Volume DF has 37K OCEAN weekly rewards budget. To participate in Volume DF, one allocates veOCEAN towards data assets. Rewards to a user are higher for more veOCEAN allocated, and for more DCV on the data asset that’s allocated to (DCV = Data Consume Volume).
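As a rough illustration of the Volume DF intuition (more veOCEAN allocated to a higher-DCV asset means more rewards), here is a sketch. The simple product-based scoring is a simplification for illustration, not the exact DF reward formula:

```python
# Illustrative only: Volume DF rewards grow with veOCEAN allocated and with
# the DCV of the allocated asset. The product scoring below is a
# simplification, not the exact DF reward formula.

def volume_df_rewards(allocations, weekly_budget=37_000):
    """allocations: list of (user, veocean_allocated, asset_dcv)."""
    scores = {user: ve * dcv for user, ve, dcv in allocations}
    total = sum(scores.values())
    return {user: weekly_budget * s / total for user, s in scores.items()}

# Same veOCEAN, but alice points at a higher-DCV asset (e.g. a prediction feed):
rewards = volume_df_rewards([("alice", 1000, 500.0), ("bob", 1000, 100.0)])
print({u: round(r, 2) for u, r in rewards.items()})
# {'alice': 30833.33, 'bob': 6166.67}
```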

Predictoor DF makes Volume DF more attractive than the status quo, because the volume-based bounds on weekly rewards tend to be qualitatively higher for prediction feeds. (See the Appendix for details.)

Curating is straightforward. Ocean assets with high DCV are easy to identify: it’s the 20 OPF-published prediction feeds. This makes the choice of veOCEAN allocation easy: point to those 20 assets.

Doing Volume DF gives Passive DF rewards too, of course.

The net result: Predictoor DF means earnings potential from Predictoor DF, Volume DF, and Passive DF.

6. Conclusion

A “predictoor” runs AI-powered prediction bots in the Ocean Predictoor ecosystem. They onboard here. They earn $ from sales of prediction feeds; higher accuracy and higher stake mean higher $.

Predictoor DF is a new incentives program that amplifies predictoors’ earnings, via extra sales to Ocean Predictoor data feeds. Predictoor DF has 37K OCEAN weekly rewards (ongoing) and 100K ROSE rewards (first 4 weeks).

The higher baseline sales make Volume DF and Passive DF more attractive.

Challenge DF rewards are reduced, and will wind down soon; the Appendix has details.

Appendix: Updates to Challenge DF

Challenge DF rewards are reduced, and will wind down soon. Here are details.

Challenge DF was introduced & shipped in July 2023 in DF Round 48 (DF48). It was always conceived as an onboarding mechanism to becoming a predictoor, ie someone who runs a bot predicting ETH and more within the Ocean Predictoor ecosystem. Since Predictoor was introduced, predictoor onboarding has greatly improved. It’s no longer worth doing Challenge DF first. So let’s simplify:)

Therefore, there are two important updates to Challenge DF.

  1. Reduced payouts. As of DF62, Challenge DF payouts are reduced from 5000 OCEAN / week to 1000 OCEAN / week. DF62 counting starts on Nov 9, 2023 and concludes on Nov 16, 2023. (Predictoor DF is introduced in DF62.) The Active DF budget is 37K / 37K / 1K OCEAN for Volume / Predictoor / Challenge DF respectively.
  2. Wind-down. DF65 is the final round for Challenge DF. Starting in DF66, the 1000 OCEAN / week will be redistributed equally to Volume DF and Predictoor DF. The Active DF budget will be 37.5K / 37.5K / 0 OCEAN for Volume / Predictoor / Challenge DF respectively.

For further details, see the Ocean Data Farming Series post, which includes “Predictoor DF” posts and other DF posts, all in chronological order.

Appendix: On DCV Bounds of Prediction Feeds

Predictoor DF makes Volume DF more attractive than the status quo. Active DF’s rewards are bounded by DCV_bound, which in turn is determined by sales volume and fees. In Predictoor DF, both those factors are raised; this raises the bound in Volume DF, which in turn means higher earning potential. Let’s elaborate.

At one time, Volume DF had a “wash consume” problem, where people published and consumed their own datasets. DF9 onwards addressed this, by putting a bound on weekly rewards:

DCV_bound = DCV * m

Where m = DCV_bounding_multiplier = Ocean community fee (0.1%) + publish market fee.

This stopped wash consume because it became unprofitable: fees eat up all potential profits.

A low DCV (data consume volume) or a low m (publish market fee) means a low DCV_bound.

Predictoor DF makes both DCV and m higher!

  • DCV is higher: the 37K OCEAN/week counts as consume volume
  • m is higher: publish market fee is 20% for prediction feeds

Therefore DCV_bound is higher. Specifically: DCV_bound = 37000 * (0.001 + 0.20) = 7437 OCEAN. In other words, at least 7437 OCEAN is available for Volume DF in any given week.
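The arithmetic above can be reproduced directly:

```python
# Reproducing the DCV_bound arithmetic from this appendix.
weekly_sales = 37_000            # OCEAN/week of Predictoor DF consume volume
m = 0.001 + 0.20                 # community swap fee + publish market fee
dcv_bound = weekly_sales * m     # bound on weekly Volume DF rewards
print(round(dcv_bound))          # 7437
```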

Notes

[1] Source of the $21K USD number:

  • CoinGecko prices on Nov 8, 2023: $0.39 / OCEAN and $0.065 / ROSE
  • $0.39 / OCEAN * 37,000 OCEAN = $14,430 USD
  • $0.065 / ROSE * 100,000 ROSE = $6,500 USD
  • Total: $14,430 + $6,500 = $20,930 ≈ $21K USD
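A quick check of the arithmetic in this note, using the 37,000 OCEAN weekly budget stated earlier in the post:

```python
# Prices are the CoinGecko figures quoted above (Nov 8, 2023).
ocean_usd = 0.39 * 37_000        # weekly OCEAN rewards in USD
rose_usd = 0.065 * 100_000       # weekly ROSE rewards in USD
total = ocean_usd + rose_usd
print(round(total))              # 20930
```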

Follow Ocean Protocol on Twitter, Telegram, or GitHub. And chat directly with the Ocean community on Discord.



AI DAOs Series

https://blog.oceanprotocol.com/ai-daos-series2-3876510d6eb4

Articles & Talks about AI DAOs & Humanity

Licence: CC0

This post collects together the key articles I’ve written, and talks that I’ve given, about AI DAOs, and humanity.

Lead-In

Lead-in Article. Blockchains for Artificial Intelligence.

Parts I & II

Part I Article (2016). AI DAOs, and Three Paths to Get There

Part II Article (2016). Wild, Wooly AI DAOs

Part I&II Talk (2017). “An Introduction to AI DAOs”, BigchainDB & IPDB meetup, Berlin, April 5, 2017 [slides][video]

Part III

Part III Main Article (2017). The AI Existential Threat: Reflections of a Recovering Bio-Narcissist

Part III Sub Article (2017). The Bandwidth++ Scenario

Part III Talk (2017). “AI, Blockchain, and Humanity: The Next 10 Billion Years of Human Civilization”, Convoco 3.0 — AI and the Common Good, Berlin, Apr 1, 2017 [slides] [video]

Part III talk (2016). “Reflections of a Recovering Bio-Narcissist on the Singularity”, riseof.ai, Aug 18, 2016, Berlin [slides][video]

Part IV — “Nature 2.0”

Whereas Parts II & III laid out some negative implications of AI DAOs, Part IV is a positive framing. Good to balance things out!

Part IV Article (2018). Nature 2.0: The Cradle of Civilization Gets an Upgrade.

Part IV talk (2018). Keynote, SET Tech Festival (Innogy), Berlin, April 16, 2018 [slideshare][slides-PDF][video]



Meet Predictoor: Accountable Accurate Prediction Feeds

https://blog.oceanprotocol.com/meet-predictoor-accountable-accurate-prediction-feeds-8b104d26a5d9

With application to finance, weather, climate, real estate, and more. Powered by Ocean Protocol

Contents
1. Motivation
2. Introduction to Predictoor
- Overview, structure, behavior
3. Predictoor Parameters
- Roadmap, feeds published, pricing
4. How to Earn
- As a predictoor, trader, or data farmer
- From earning to thriving; early benchmarks
5. Architecture
- Privacy & Oasis Sapphire
- Implementation; backend details
6. Possible Futures
7. Conclusion
Appendix: Predictoor Data Farming

Summary

We dream of a world of 10,000 truly accurate prediction feeds, for everything from rain forecasts to sea level rise, or traffic congestion to ETH price. Ocean Predictoor is an on-chain, privacy-enabled, AI-powered application and stack that is bringing this dream to reality. Its testnet is live now, mainnet soon, and incentives program soon after.

1. Motivation

Tomorrow belongs to those who can hear it coming. — David Bowie

Accurate predictions are valuable. With them, one can take action and create value. Conversely, inaccurate predictions can be problematic. Predictions have value because they’re the final step in a data supply chain, right before action is taken by the user.

Prediction feeds are a stream of predictions for a given time series. This could be predicting the price of ETH every 5 minutes, or the sea temperature daily. A feed may be binary, i.e. whether a time series changes up or down: ↑↓↓↓↑↓↑↑. Accurate prediction feeds are valuable.

Alas, accurate predictions are *hard*. Worse, typical prediction feeds have no accountability on accuracy. If the weatherman says “no rain for today” and then it rains, a farmer could get stuck in the mud, wrecking a portion of his crops. The weatherman doesn’t feel the impact of wrong predictions, but the farmers sure care!

Imagine if there was accountability. Accuracy would go up; the farmer would be stuck less. Imagine accountable prediction feeds not only for rain, but also for wind, sea temperature, road congestion, train delays, ETH prices, NVDA prices, housing prices, and more. Imagine tens of thousands of prediction feeds with accountable accuracy. Imagine them globally distributed, and censorship resistant. Imagine accuracy improving with time.

This is the dream, that became a goal, that got built. This is Ocean Predictoor.

2. Introduction to Predictoor

Yesterday is gone. Tomorrow has not yet come. We have only today. Let us begin. — A mother

2.1 What is Predictoor?

Ocean Predictoor is a dapp and a stack for prediction feeds. It has accountability for accuracy, via staking. It’s globally distributed and censorship-resistant, by being on-chain. We expect its accuracy to improve over time, due to its incentive structure. Its first use case is DeFi token prediction because users can close the data value-creation loop quickly to make tangible $.

Conceptual overview of Predictoor

Prediction feeds are crowd-sourced. “Predictoor” agents submit individual predictions and stake on them. They make money when they’re correct and lose money when not. This drives accurate prediction feeds, because only accurate predictoors will be making $ and sticking around.

“Trader” agents buy aggregate predictions, then use them to take action like buying or selling. The more accurate the predictions, the more easily traders make $, and the longer they stick around to keep buying prediction feeds out of their trading profits.

Predictoor is built on the Ocean Protocol stack, including contracts for tokenized data and middleware to cache metadata. To keep predictions private unless paid for, Predictoor uses Oasis Sapphire privacy-preserving EVM chain.

The initial dapp is live at predictoor.ai. It’s for up/down predictions of BTC, ETH, and other tokens’ prices. The dapp helps users build a mental model of Predictoor behavior. Predictoors’ and traders’ main workflow is to run prediction / trading bots with the help of the Py SDK. We have seeded Predictoor with bots that have AI/ML models of accuracy comfortably above 50% — a precondition to make $ trading [0].

Screenshot from predictoor.ai

2.2 Predictoor Structure

The image below gives an overview of Predictoor structure.

In the image top left, predictoors, traders, or anyone can play with predictoor.ai to build an understanding of how Predictoor works. One feed is free; the rest are available for purchase. At first, only the free feed is visible. Users can connect their web3 wallet and buy another feed.

In the image top middle, predictoors graduate to building & deploying Python “Template Predictoor bots” (agents), which submit predictions every 5 minutes. Now, predictoors can see how to make $ from making predictions, with plenty of room to improve AI/ML modeling accuracy and make more $.

Overview of Predictoor structure

In the image top right, traders graduate from predictoor.ai to building & deploying Python “Template Trader bots” (agents), which grab the latest prediction every 5 minutes, as soon as it’s available, then trade using that prediction (and other info). Now, Traders can see how to make $ from buying predictions, with plenty of room to improve trading strategy and make more $.

In the image bottom is the Oasis Sapphire chain, with Predictoor feed contracts deployed to it. There’s one contract deployed for each {pair, exchange, timescale} such as {ETH/USDT, Binance, 5m}.

2.3 Predictoor Behavior

We just covered Predictoor structure. Let’s now layer on some Predictoor behavior, with the help of the image below. We’ll walk through actions by Predictoors and Traders related to predictions for time slot “epoch t+1”, and show how they make $.

We assume predictions on BTC, where epoch t ends at 5:00pm, t+1 ends at 5:05pm, and t+2 ends at 5:10pm. We assume that traders have already purchased a subscription via predictoor.ai, Python, or otherwise. When we discuss an action by a predictoor or trader, we recognize that it’s typically executed by their agent (bot).

Epoch t. This is left 1/3 of the image. It starts at 4:55pm and ends at 5:00pm. Predictoor 1 (pink) predicts that BTC close price for epoch t+1 will be higher (“↑”) than close price for epoch t. He submits a tx to that chain with that prediction, and some OCEAN stake of his choice (higher stake = more confident). Predictoor 2 (dark green) does the same. Predictoor 3 (light green) predicts “↓” and stakes. The chain stores these prediction values, privately.

Predictoor Behavior

Epoch t+1. The middle 1/3 of the image covers epoch t+1. It starts at 5:00pm and ends at 5:05pm. The BTC Predictoor contract computes the aggregated predicted value (agg_predval) as the stake-weighted sum across individual predictions.

agg_predval = (stake1*predval1 + stake2*predval2 + …) / (stake1 + stake2 + …)

The contract then makes agg_predval visible to its subscribers. Smart traders may take this information and act immediately. A baseline strategy is “if it predicts ↑ then buy; if it predicts ↓ then sell or short”.
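The agg_predval formula above can be written out in Python; here predval is 1.0 for “↑” and 0.0 for “↓”, with stakes in OCEAN:

```python
# The stake-weighted aggregation from the formula above.
# predval is 1.0 for "up" and 0.0 for "down"; stakes are in OCEAN.

def agg_predval(stakes, predvals):
    num = sum(s * p for s, p in zip(stakes, predvals))
    return num / sum(stakes)

# Two predictoors stake on "up" (100 + 50 OCEAN), one on "down" (30 OCEAN):
p = agg_predval(stakes=[100, 50, 30], predvals=[1.0, 1.0, 0.0])
print(round(p, 3))   # 0.833 -> the aggregate leans "up"
```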

Epoch t+2. This is the right 1/3 of the image. It starts at 5:05pm and ends at 5:10pm. Both traders and trueval agent take action (and, predictoors get paid).

  • Actions by Traders. Typically, traders exit their position immediately, exactly 5 minutes since they got the 5-minute-ahead prediction and acted [1]. If the prediction feed was accurate enough and trading fees & slippage weren’t too high, then the trader makes money on average.
  • Actions by Trueval agent; predictoors get paid. The trueval agent is a process that grabs true values from price feeds, e.g. Binance, and submits them to the chain, into the smart contract [2]. The “submit” transaction also takes the opportunity to calculate each predictoor’s reward or slashing, and update their OCEAN holdings in the contract accordingly. (Funds aren’t sent per se; they’re made available via ERC20 “approve”, for the predictoor to transfer at some later point.) Predictoor 3 got his OCEAN slashed because he was wrong; predictoors 1 and 2 receive that as income, in addition to receiving income from prediction feed sales to traders. Predictoors can claim their balance anytime.
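The settlement step above can be sketched as follows: wrong predictoors are slashed, and their stake plus the epoch’s sales revenue is split pro-rata among the correct ones. This is an illustration of the mechanism, not the actual contract code:

```python
# Simplified settlement: wrong predictoors are slashed; correct ones split
# the slashed stake plus the epoch's sales revenue, pro-rata on stake.
# This illustrates the mechanism; the real contract logic differs in detail.

def settle_epoch(predictions, true_up, epoch_sales):
    """predictions: list of (address, predicted_up, stake_in_ocean)."""
    correct = [(a, s) for a, up, s in predictions if up == true_up]
    slashed = sum(s for _, up, s in predictions if up != true_up)
    pool = slashed + epoch_sales
    total_stake = sum(s for _, s in correct)
    # Each correct predictoor gets their stake back plus a share of the pool.
    return {a: s + pool * s / total_stake for a, s in correct}

payouts = settle_epoch(
    [("predictoor1", True, 100.0), ("predictoor2", True, 50.0),
     ("predictoor3", False, 30.0)],
    true_up=True, epoch_sales=3.0)
print(payouts)   # predictoor1 gets 122.0, predictoor2 gets 61.0; p3 slashed
```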

3. Predictoor Parameters

3.1 Roadmap

Predictoor will roll out in three phases: Testnet → Mainnet → Data Farming.

  1. Tue Sep 12, 2023: Predictoor Testnet is ready for community. This means Ocean Predictoor smart contracts, middleware, and frontend running on Oasis Sapphire testnet.
  2. Tue Oct 3 [4w later]: Predictoor Mainnet is ready for community. This is like testnet but tokens have real value. There will be bridges as appropriate.
  3. Thu Nov 2 [4w 2d later]: Predictoor Data Farming starts counting. There will be 37,000 OCEAN weekly rewards to incentivize predictoors. The appendix has details.

3.2 Feeds Published

For testnet, there are 10 feeds: an X/USDT pair for each of the top-10 coins by market cap (ignoring stablecoins), at 5m timescales, on Binance. These are paid feeds (>0% fees). The coins are: X = BTC, ETH, BNB, XRP, ADA, DOGE, SOL, LTC, TRX, DOT.

For mainnet, tentatively, there will be 20 feeds: 10 at 5m timescales like above, plus 10 at 60m timescales.

3.3 Pricing

The price to subscribe to one feed for 24 hours is 3.00 OCEAN. This includes all fees.

Fee details:

  • 0.1% community swap fee
  • 20% fee to Ocean Protocol Foundation. (Will be used to further drive Predictoor, and to burn OCEAN.)
  • For reference, price without fees is 2.49791840133 OCEAN. To calculate this: Let x = price without fees. Then x * (1 + 0.20 + 0.001) = 3.0 → x = 3.0 / (1 + 0.20 + 0.001) = 2.49791840133
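Reproducing the fee arithmetic above:

```python
# Back out the price without fees from the 3.00 OCEAN subscription price.
price_with_fees = 3.0
fee_rate = 0.20 + 0.001          # OPF fee + community swap fee
x = price_with_fees / (1 + fee_rate)
print(round(x, 11))              # 2.49791840133
```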

Pricing is subject to change based on learnings, and feedback from community.

4. How to Earn

4.1 How to Earn as a Predictoor

If you have a background in AI, ML, data science or statistics (and these overlap!) then you’re well suited to become a predictoor and make $.

Typical steps as a predictoor:

  1. Play with predictoor.ai. Go to predictoor.ai to build intuition: observe the free feed, perhaps buy a few feeds, and watch them change over time.
  2. Do Challenge DF: one-off predictions. Practice making accurate AI/ML based predictions via Challenge Data Farming. Submissions are due Wednesdays at midnight, for ETH price predictions 5 min, 10 min, …, 60 min ahead. Every week, 5000 OCEAN prize money is distributed to the three lowest-error submissions.
  3. Run a predictoor bot: continuous predictions. Follow the steps in the Predictoor README. You’ll start by deploying a bot locally that submits a random prediction every 5 minutes. Then you’ll add AI/ML model predictions. Then you’ll do it on a remote testnet staking fake OCEAN. Finally, you’ll do it on mainnet staking real OCEAN.
  4. Optimize the bot. Improve model prediction accuracy via more data and better algorithms. Extend to predict >1 prediction feeds (Predictoor has many). Wash, rinse, repeat.

The actions as a predictoor give the following ways to earn:

  • Feed sales. At an epoch, sales revenue (minus fees) for that epoch goes to predictoors. It’s distributed pro-rata by stake among the predictoors who predicted the true value correctly. The revenue for an epoch is the fraction of sales spread uniformly across the subscription length. A price of 3 OCEAN, 5m epochs, and 24h (1440m) subscriptions gives a revenue of (# subscribers) * (3 OCEAN) / (1440m / 5m) per epoch.
  • Stake reshuffling. At an epoch, incorrect predictoors have their stake slashed. This slashed stake is distributed to the correct predictoors pro-rata on their stake.
  • Predictoor DF. The third phase of Predictoor rollout will have an incentives program that amounts to additional earning for predictoors. 37,000 OCEAN/week rewards.
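The per-epoch feed-sales revenue in the first bullet can be computed directly; the subscriber count here is hypothetical:

```python
# Per-epoch feed-sales revenue, per the formula in the bullet above.
price = 3.0                    # OCEAN per 24h subscription to one feed
epochs_per_sub = 1440 / 5      # 288 five-minute epochs in 24 hours
subscribers = 10               # hypothetical subscriber count

revenue_per_epoch = subscribers * price / epochs_per_sub
print(round(revenue_per_epoch, 4))   # 0.1042 OCEAN, shared by correct predictoors
```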

Don’t expect to be 100% accurate in your up/down predictions. Marginally better than 50% might be enough; and be skeptical if you’re greatly above 50%: you probably have a bug in your testing.

Predictoors can earn even more, via complementary actions:

  • Challenge DF. Predict accurately for weekly prizes, as discussed earlier.
  • Passive DF & Volume DF. Lock veOCEAN for OCEAN, then point the veOCEAN to data assets with high data consume volume (DCV). Predictoor feeds are great candidates for high-DCV assets.

Every week there’s 150,000 total OCEAN rewards for Ocean Data Farming. This will increase in early 2024, and more yet later. Here are details.

⚠️ You will lose money as a predictoor if your $ out exceeds your $ in. If you have low accuracy you’ll have your stake slashed a lot. Do account for gas fees, compute costs, and more. Everything you do is your responsibility, at your discretion. None of this blog is financial advice.

4.2 How to Earn as a Trader

You can make $ by buying prediction feeds, and using them as an input — as “alpha” — to your trading approach.

Typical steps:

  1. Play with dapp, and trade. First, go to predictoor.ai to build intuition: observe the free feed, perhaps buy a few feeds, and watch them change over time. In a second window, have Binance open. Employ a baseline trading strategy: when a new Predictoor prediction pops in, buy if “↑”, and sell or short if “↓”; exit the position 5 min later.
  2. Run a trader bot. Follow the steps in the Trader README. You’ll start by deploying a bot locally that follows the baseline trading strategy: when a new Predictoor prediction pops in, it buys if “↑”, and sells if “↓”. Then you’ll do it on a remote testnet with fake tokens. Finally, you’ll do it on mainnet with real tokens on a real exchange.
  3. Improve & extend. Improve trading performance via more sophisticated trading strategies. This is a universe all of its own! Extend to >1 prediction feeds (Predictoor has many). Wash, rinse, repeat.
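The baseline strategy from step 2 can be sketched as below. The exchange object here is a stub that just records orders; a real bot would use an actual exchange client, and would also exit the position one epoch (5 minutes) later:

```python
# Sketch of the baseline strategy. StubExchange is a hypothetical stand-in
# for a real exchange client; it records orders instead of sending them.

class StubExchange:
    def __init__(self):
        self.orders = []
    def buy(self, pair, size):
        self.orders.append(("buy", pair, size))
    def sell(self, pair, size):
        self.orders.append(("sell", pair, size))

def act_on_prediction(exchange, pair, agg_predval, size):
    """Baseline: buy if the aggregate prediction leans up, else sell/short.
    (Exiting the position one epoch later is not shown here.)"""
    if agg_predval > 0.5:
        exchange.buy(pair, size)
    else:
        exchange.sell(pair, size)

ex = StubExchange()
act_on_prediction(ex, "BTC/USDT", agg_predval=0.83, size=0.01)
print(ex.orders)   # [('buy', 'BTC/USDT', 0.01)]
```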

The actions as a trader offer a single yet powerful way to earn: trading revenue. Buy low and sell high! (And the opposite with shorting.)

Traders can earn even more via a complementary action: Volume DF. Lock veOCEAN for OCEAN, then point the veOCEAN to data assets with high DCV. Predictoor feeds are great candidates for high-DCV assets.

⚠️ You will lose money trading if your $ out exceeds your $ in. Do account for trading fees, order book slippage, cost of prediction feeds, and more. Everything you do is your responsibility, at your discretion. None of this blog is financial advice.

4.3 How to Earn as a Data Farmer

Even if you’re not active as a predictoor or a trader, you can earn nonetheless. By simply locking your OCEAN for veOCEAN, you can earn passive rewards. Moreover, if you point your veOCEAN to data assets that have high data consume volume (Predictoor assets are a good choice), then you can earn “Volume DF” active rewards.

4.4 From Earning, to Thriving

Why did we, at Ocean Protocol, build Predictoor? Why did we structure it around predictions, for predictoors and traders to earn? Can earning grow into thriving? We explain here.

Ocean Protocol’s mission is to level the playing field for data and AI, by kickstarting an open data economy. We’ve asked: How do people sustain and thrive in the emerging open data economy? Our answer is simple: ensure that they can make money!

However, this isn’t enough. We need to dive deeper. The next question is: How do people make money in the open data economy? Our answer is: create value from data, make money from that value, and loop back and reinvest this value creation into further growth. We call this the Data Value-Creation Loop.

The data value-creation loop

If we unroll the loop, we get a data value supply chain. In most supply chains, the most value creation is at the last step, right before the action is taken. Would you rather be the farmer in Costa Rica selling a sack of coffee beans for $5, or Starbucks selling 5 beans’ worth of coffee for $5? It’s the same for data. For data value supply chains, the most value creation is in the prediction step.

Predictoor is a direct effort to make this happen. Predictoor aims to make it easy for people to make $ doing predictions (as predictoors), and taking actions against those predictions (as traders). If traders make $ with sufficiently accurate feeds, then they’ll keep buying the feeds, and then the predictoors will make $. As the predictoors pursue higher accuracy, they’ll buy their own data feeds, i.e. working backwards in the data supply chain. Over time, the whole data supply chain will flesh out with $ at each step.

And the starting point is predictions.

4.5 Early Earnings Benchmarks

As discussed, Predictoor aims to make it easy for people to make $ doing predictions (as predictoors), and taking actions against those predictions (as traders).

Before we started building Predictoor, we first asked: can we predict ETH (etc) up/down with accuracy? Then we conducted intensive AI/ML research towards this question. The results of this investigation were positive 😎. TBH, we were surprised at the degree of market inefficiency.

Our next question was: with these predictions, can we make $ trading? Again, we conducted intensive AI/ML research, and again found positive results 😎😎.

Here we share some results of that research — a glimpse of how deep the rabbit hole goes — to inspire would-be predictoors and traders in their own work. The model was trained on data from January 1, 2021 to June 30, 2023, with simulated results for the first 24 days of July 2023. The “baseline” trading strategy described earlier was used.

The image below shows simulated returns as a function of order size, for BTC/TUSD on Binance. Duration = 7000 ticks x 5m/tick = 24.3 days of trading. It simulates spread effects. It assumes 0% fees. Note how the size of the order affects the return. This is because BTC/TUSD is not very liquid; therefore larger amounts cause slippage.

Simulated returns vs time of BTC/TUSD trading on Binance. Trade size has an impact.

The image below has the same experimental setup, but for BTC/USDT pair. The size of the order does not affect the return, because BTC/USDT is more liquid than BTC/TUSD.

Simulated returns vs time of BTC/USDT trading on Binance. Trade size has little impact.
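
The slippage effect can be illustrated with a toy order-book model. This is not the simulator used in the research above; it is a hypothetical sketch of why larger orders on an illiquid pair get worse average fill prices:

```python
# Toy slippage model: walking a (hypothetical) order book.
# Each price level holds `depth` units; buying more than one level's
# depth pushes the fill into higher levels, so larger orders get
# worse average prices. All numbers are illustrative.

def avg_fill_price(order_size, best_ask=100.0, depth=10.0, tick=0.05):
    """Average price paid when buying `order_size` units from a book
    with `depth` units at each price level, spaced `tick` apart."""
    remaining, cost, level = order_size, 0.0, 0
    while remaining > 0:
        take = min(remaining, depth)
        cost += take * (best_ask + level * tick)
        remaining -= take
        level += 1
    return cost / order_size

print(avg_fill_price(5.0))   # 100.0: fits entirely at the best ask
print(avg_fill_price(30.0))  # ~100.05: walks three price levels
```

On a deeply liquid pair like BTC/USDT, `depth` is effectively huge relative to order size, so the average fill price barely moves — matching the flat curve in the second image.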

5. Predictoor Architecture

5.1 Privacy and Oasis Sapphire

Predictoor needs privacy for:

  1. Submitted predictions;
  2. Computing aggregated predictions; and
  3. Aggregated predictions, which only subscribers can see.

This could all be done on fully-centralized infrastructure. But doing so would fail on our other goals: being globally distributed, censorship resistant, and non-custodial.

Targeting these needs, we researched & prototyped many privacy technologies. Oasis Sapphire emerged as the best choice because, as the only privacy-preserving EVM chain in production, it could handle these needs cleanly end-to-end.

5.2 Software Implementation

Most of the predictoor.ai dapp is implemented in the pdr-web repo, with help from pdr-websocket to fetch feed data and Ocean Aquarius for metadata caching.

The template Predictoor bots, trader bots, trueval bot, and prediction feed publishing are all in the pdr-backend repo. Contracts are in the Ocean contracts repo.

Events emitted by contracts are indexed as Ocean subgraphs, to be consumed by predictoor.ai and the bots. The backend subgraph README has more info.

5.3 Backend & Contract Details

The image “Overview of Predictoor structure” presented earlier showed much of the Predictoor architecture. The image below adds detail around the backend (bottom 1/3 of diagram). Let’s discuss.

Details of Predictoor structure (Architecture)

Smart Contracts. There’s one Predictoor contract for each prediction feed, at each exchange/timescale: BTC/USDT at Binance/5m, ETH/USDT at Binance/5m, and so on. Each contract is an Ocean datatoken contract, with a new template for prediction feeds [3]. It implements ERC20, Ocean, and Predictoor-specific behavior as follows.

  • ERC20 behavior. It implements the ERC20 interface and therefore plays well with ERC20-friendly crypto wallets, DEXes, etc.
  • Ocean behavior. Being part of Ocean, having 1.0 datatokens means you can access the underlying data asset for the duration of the subscription (once you’ve initiated the order). For Predictoor contracts this is 24h. Each datatoken contract has a parent Ocean data NFT with metadata, means to specify & collect fees, and more.
  • Predictoor behavior. Each datatoken contract has additional methods specific to Predictoor: submitting predictions, submitting truevals, computing aggregated predictions, etc.

Testnet. Predictoor is initially deployed to Oasis Sapphire testnet. Staking & payment is in (fake) OCEAN tokens; gas fees are in (fake) ROSE tokens. Users get both (fake) tokens via faucets.

Mainnet, tokens, bridges. Predictoor will be deployed to Oasis Sapphire mainnet soon. In this case one must use real OCEAN tokens, which bridge from ETH mainnet; and real ROSE tokens, which bridge from Oasis Emerald chain. We’ll share more details when Predictoor officially becomes available on Sapphire mainnet (step 2 of Predictoor roadmap).

6. Possible Futures

This first release of Predictoor is just the beginning. We anticipate adding many more crypto feeds: more coins, more exchanges (CEX and DEX), more timescales; and predictions beyond just coin prices.

We envision the Predictoor stack being used in other verticals like real estate, weather, climate, transportation, and agriculture. We envision refinements of the core Predictoor technology.

Predictoor’s predictions are the final step of the web3 data value creation pipeline, right before action is taken and $ is made. We anticipate that the $ made by Predictoors will filter upstream, into work to create feeds used by Predictoors, such as sentiment analysis, volatility, and information ticks.

Most interestingly, the agents of the Predictoor ecosystem may evolve. Starting as relatively simple agents running on the centralized cloud, Predictoor’s incentives can drive them to become decentralized and autonomous, while performing AI. Yes, AI DAOs. Swarms of sovereign agents: some predicting, some trading, and some providing support services along the data supply chain. It will feel a bit like nature, a bit out of control, and wholly exciting!

Left: a swarm/agent-based AI DAO architecture. Right: book cover for Out of Control by Kevin Kelly

7. Conclusion

We can only see a short distance ahead, but we can see plenty there that needs to be done.

— Alan Turing

We dream of a world of 10,000 truly accurate prediction feeds, for everything from rain forecasts to sea level rise, or traffic congestion to ETH price. Ocean Predictoor is an on-chain, privacy-enabled, AI-powered application and stack that is bringing this dream to reality.

Predictoor’s testnet is live now, mainnet soon, and incentives program soon after. We anticipate Predictoor to extend beyond DeFi to other verticals like climate and agriculture. Predictoor agents may evolve into AI DAOs with emergent swarm-like behavior. This is the future.

Appendix: Predictoor Data Farming

There will be 37,000 OCEAN weekly rewards to incentivize predictoors.

Here are details. Ocean Data Farming (DF) is an incentives program currently with 150K OCEAN rewards per week.

DF streams & sub-streams are:

  1. Passive DF. 75K OCEAN per week. Lock OCEAN for veOCEAN; rewards are pro-rata to veOCEAN holdings.
  2. Active DF. 75K OCEAN per week. It has these substreams:
    i. Volume DF. Allocate veOCEAN towards data assets with high Data Consume Volume (DCV), to earn more. Rewards are a function of DCV and veOCEAN stake.
    ii. Challenge DF. Predict future ETH price, one-time weekly. Rewards are a function of accuracy. Details here.
    iii. Predictoor DF [new]. Predict future price movement, continuously (as a predictoor).
    – Current weekly rewards are 70K / 5K OCEAN for Volume / Challenge DF respectively. When Predictoor DF starts (Nov 2), weekly rewards will be 37K / 1K / 37K OCEAN for Volume / Challenge / Predictoor DF respectively.
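
The budget split above can be sanity-checked with a few lines (all amounts taken directly from the text):

```python
# Sanity-check of the weekly DF budget split described above.
TOTAL = 150_000            # OCEAN per week in DF Main 1
passive = TOTAL // 2       # 75K Passive DF
active = TOTAL - passive   # 75K Active DF

# Active DF substreams before Predictoor DF starts...
before = {"volume": 70_000, "challenge": 5_000}
# ...and after Nov 2, when Predictoor DF launches:
after = {"volume": 37_000, "challenge": 1_000, "predictoor": 37_000}

assert sum(before.values()) == active   # 70K + 5K = 75K
assert sum(after.values()) == active    # 37K + 1K + 37K = 75K
print(passive, active)                  # 75000 75000
```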

About Ocean Protocol

Ocean Protocol was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

The Ocean Protocol core team builds the Ocean Predictoor product.

About Oasis Network

The Oasis Network is a privacy-focused, scalable, Proof-of-Stake Layer-1 smart contract platform that is Ethereum Virtual Machine compatible. Oasis boasts a multi-layer modular architecture that enables the scalability and flexibility to deploy low-cost privacy-focused smart contracts toward Web3 ideals.

Oasis Network includes Oasis Emerald chain, and Oasis Sapphire chain. The latter is privacy-preserving via Trusted Execution Environments (TEEs).

Notes

[0] On testnet, the Predictoor bots run by the Ocean core team make random predictions. When mainnet goes live, the Predictoor bots run by the core team will be AI/ML-powered.

[1] They might have also exited earlier if price spiked (“take profits” action) or price was crashing (“stop loss” action).

[2] Submitting a “true” price value like this could also have been performed using a Chainlink setup (or otherwise). However we wanted to retain flexibility for feeds not currently on Chainlink, for now.

[3] The implementation is in templates/ERC20Template3.sol in Ocean’s contracts repo.

Final Note

None of the content in this post should be taken as financial advice. Everything you do is your responsibility, at your discretion.

Follow Ocean Protocol on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord. Or, track Ocean progress directly on GitHub.


Meet Predictoor: Accountable Accurate Prediction Feeds was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Data Value-Creation Loop

https://blog.oceanprotocol.com/the-data-value-creation-loop-68e23575be02?source=rss----9d6d3078be3d---4

Thrive in the open data economy by closing the loop towards speed and value

The Data Value-Creation Loop

Motivation

The core infrastructure is in place for an open data economy. Dozens of teams are building on it. But it’s not 100% obvious for teams how to make $.

We ask:

How do people sustain and thrive in the emerging open data economy?

Our answer is simple: ensure that they can make money!

However, this isn’t enough. We need to dive deeper.

The Data Value-Creation Loop

The next question is:

How do people make money in the open data economy?

Our answer is: create value from data, make money from that value, and loop back and reinvest this value creation into further growth.

We call this the Data Value-Creation Loop. The figure above illustrates.

Let’s go through the steps of the loop.

  • At the top, the user gets data by buying it or spending $ to create it.
  • Then, they build an AI model from the data.
  • Then, they make predictions. E.g. “ETH will rise in next 5 minutes”.
  • Then, they choose actions. E.g. “buy ETH”.
  • In executing these actions, the data scientist (or org) will make $ on average.
  • The $ earned is put back into buying more data, and other activities. And the loop repeats.
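
The steps above can be sketched as a toy loop. All numbers here are illustrative, not real economics; the point is the reinvestment step at the end:

```python
# Toy sketch of the Data Value-Creation Loop.
# Assumes (hypothetically) that each pass earns 1.5x what was spent on data.

def loop_once(capital):
    data_cost = 0.2 * capital   # buy (or create) data
    # build model -> predict -> choose action -> execute, abstracted here:
    revenue = 1.5 * data_cost   # make $ on average from the actions
    return capital - data_cost + revenue

capital = 100.0
for _ in range(3):              # $ earned is reinvested; the loop repeats
    capital = loop_once(capital)
print(round(capital, 2))        # 133.1: 10% compound growth per pass
```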

In this loop, dapp builders can help their users make money; data scientists can earn directly; and crypto enthusiasts can catalyze the first two if incentivized properly (e.g. to curate valuable data).

If we unroll the loop, we get a data value supply chain. In most supply chains, the most value creation is at the last step, right before the action is taken. Would you rather be the farmer in Costa Rica selling a sack of coffee beans for $5, or Starbucks selling 5 beans’ worth of coffee for $5? It’s the same for data.

To the question “How do people make money in the open data economy?”, the answer “create value from data!” may seem like a truism. Don’t fool yourself: it’s highly useful in practice. Focus only on activities that fully go through the data value-creation loop.

However, this is still too open-ended. We need to dive deeper.

Which Vertical? How To Compare Opportunities

There are perhaps dozens of verticals or hundreds of possible opportunities of creating and closing data value-creation loops. How to select which? We’ve found that two measuring sticks help the most.

Key criteria:

  1. How quickly can one go through the data value-creation loop?
  2. What’s the $ size of the opportunity?

For (2), it’s not just “what’s the size of the market”, it’s also “can the product make an impact in the market and capture enough value to be meaningful”.

We analyzed dozens of possible verticals according to these criteria. For any given data application, the loop should be fast with serious $ opportunity.

Here are some examples.

  • Small $, slow. Traditional music is small $ and slow, because incumbents like Universal dominate by controlling the back catalogue.
  • Large $, slow. Medicine is large $ but slow, due to the approval process.
  • Small $, fast. Decentralized music is fast but small $ (for now! Fingers crossed).

We want: large $, fast. Here are the standouts.

  • Decentralized Finance (DeFi) is a great fit. One can loop at the speed of blocks (or faster), and trade volumes have serious $.
  • LLMs and modern AI are close: one can loop quickly, and with the right application, make $. The challenge is: what’s the right application?

On Ocean Protocol Strategy

At Ocean, we encourage our ecosystem collaborators to close data value-creation loops, with maximum $ and speed. We follow our own advice for internal projects too. Accordingly, a lot of our focus is on DeFi and LLMs / modern AI. These are natural for us because Ocean is natively a Web3 and an AI project.

Loops, then scale. Once one or two fast & high $ data value-creation loops have been established on Ocean, where people are sustainably making money, we’ll likely adjust our activities to scale those loops up.

Ubiquity. Our aim is to grow over the long-term, until Ocean is ubiquitous as a tool to level the playing field on data and AI.

Summary

To sustain and thrive in the open data economy: make money!

Do this by closing the data value-creation loop, in a vertical / opportunity where you can loop quickly and the $ opportunity is large.

Notes

This content was originally published in “Ocean Protocol Update || 2023”, Mar 10, 2023. To make the content evergreen-useful and for easier reference, we extracted and adapted it into this standalone article.



The Data Value-Creation Loop was originally published in Ocean Protocol on Medium.

Introducing Challenge Data Farming

https://blog.oceanprotocol.com/introducing-challenge-data-farming-378bba28fc97?source=rss----9d6d3078be3d---4

Predict future ETH prices, for 5000 OCEAN in prizes weekly. Now part of Ocean DF

Contents
1. Abstract
2. Introduction
3. DF, Updated
4. Challenge DF Details
5. FAQ
6. Conclusion

1. Abstract

Ocean Protocol’s monthly “Predict-ETH” data challenges have become weekly “Challenge DF” challenges, as part of Ocean Data Farming (DF). Each week has 5,000 OCEAN total in prizes towards those who predict the price of ETH with the lowest error. It’s part of Active DF; Passive DF is unchanged.

Predict ETH price accurately to win! Weekly competitions and prize $. The first challenge is live now. Get started here.

2. Introduction

2.1 Background: Ocean Data Challenges

Ocean Data Challenges are data science competitions run by the Ocean core team with $thousands in prize money. Past competitions include Dubai real estate prediction, OCEAN Twitter sentiment analysis, Catalunya air pollution, and more. There’s a steady pipeline of future challenges. Most competitions are judged on a combination of subjective merit (e.g. data analysis) and objective merit (e.g. error).

2.2 Background: Predict-ETH Data Challenges

There’s great value in being able to predict ETH well. It helps to make $ in buying / selling ETH, DeFi trading, yield farming or DeFi protocol development. Such predictions — if accurate — would be of high interest to buy, e.g. as a datafeed within Ocean ecosystem.

Accordingly, we’ve encouraged the Ocean community to get good at predicting ETH. Specifically, within Ocean Data Challenges, we’ve held monthly “Predict-ETH” Challenges, running from October 2022 to July 2023 (7 total). Participants had to predict the ETH price into the near-term future: 5 min, 10 min, …, 60 min ahead. Reward $ goes to those with lowest prediction error. Error is a 100% objective measure.

2.3 Background: Ocean Data Farming

Ocean Data Farming (DF) is a rewards program that incentivizes growth of Data Consume Volume (DCV) [1] in the Ocean ecosystem. DF is like DeFi liquidity mining, but tuned to drive DCV. DF emits OCEAN weekly for (a) passive rewards and (b) active rewards.

We launched DF in June 2022, and arrived at DF Main phase in March 2023.

2.4 DF Evolution

We’d set expectations that we may tune DF to include data competitions [2][3]. Now, that’s happening.

We’ve moved the ETH prediction competitions from Data Challenges framework to Ocean DF framework for two reasons. First, the competitions were 100% objective, and so could be automated to lower administration overhead; DF already had relevant weekly ops to support such a competition. Second, it sets up DF for further evolution related to predictions, stay tuned;)

3. DF, Updated

This section describes how DF is updated, in terms of structure, budgets, and how people can earn OCEAN in DF.

3.1 DF Structure, Updated

Challenge DF is a new substream of Active DF [4].

Here’s the updated structure of DF:

  • Passive DF. Lock OCEAN for veOCEAN, for passive rewards. Reward depends on veOCEAN amount, which in turn depends on amount of OCEAN locked and for how long.
  • Active DF. Substreams:
    Volume DF. Here, people curate data by pointing their veOCEAN towards data assets. Reward depends on asset DCV, veOCEAN amount, and more. (This is what was previously simply called Active DF; now it’s one substream of Active DF.)
    Challenge DF. Here, people predict future ETH price, and rewards are a function of prediction accuracy. (This is the new substream of active DF.)

3.2 DF Budgets, Updated

The DF budget change is mild: 5,000 OCEAN weekly that was going towards Volume DF, is now going towards Challenge DF. Everything else stays the same. Let’s elaborate.

We’re currently in DF Main 1, with 150,000 OCEAN per week in rewards. This gets allocated as follows.

  • Passive DF: 75,000 OCEAN (50%)
  • Active DF: 75,000 OCEAN (50%), allocated as:
    Volume DF substream: 70,000 OCEAN
    Challenge DF substream: 5,000 OCEAN

There is no change to how DF Main ratchets up OCEAN rewards over time. Here’s a summary. In Mar 2024, rewards increase 2x to 300,000 OCEAN. Six months later, another 2x to 600,000 OCEAN. Six months later, another near-2x to 1.1M OCEAN/week, then decaying over time. It’s always a 50/50 split between passive/active DF. This post has details.
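
The ratchet schedule above, in numbers (the later dates are inferred from the “six months later” phrasing, so treat them as approximate):

```python
# DF Main weekly OCEAN rewards over time, per the summary above.
# Dates after Mar 2024 are approximations inferred from the text.
schedule = [
    ("DF Main 1 (now)",          150_000),
    ("Mar 2024",                 300_000),    # 2x
    ("~Sep 2024",                600_000),    # another 2x
    ("~Mar 2025, then decaying", 1_100_000),  # near-2x, decays afterwards
]
for phase, weekly in schedule:
    passive = weekly // 2       # always a 50/50 passive/active split
    active = weekly - passive
    print(f"{phase}: {weekly} total = {passive} passive + {active} active")
```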

3.3 Earning DF Rewards, Updated

There are three ways to earn and claim rewards: passive DF (like before), Active DF : Volume DF (like before), and Active DF : Challenge DF (new).

  • Passive DF. To earn: lock OCEAN for veOCEAN, via the DF webapp’s veOCEAN page. To claim: go to the DF Webapp’s Rewards page; within the “Passive Rewards” panel, click the “claim” button. Ocean docs has details.
  • Active DF
    Volume DF substream. To earn: allocate veOCEAN towards data assets, via the DF webapp’s Volume DF page. To claim: go to the DF Webapp’s Rewards page; within the “Active Rewards” panel, click the “claim” button (it claims across all Active DF substreams at once). Ocean docs has details.
    Challenge DF substream. To earn: make ETH predictions for specific times; the next section gives an overview of how. To claim: go to the DF Webapp’s Rewards page; within the “Active Rewards” panel, click the “claim” button (it claims across all Active DF substreams at once). Challenge DF README has detailed instructions.

Expect further evolution in Active DF: additional substreams, tuning substreams, and budget adjustments among substreams. What remains constant is passive DF, and total OCEAN rewards emission schedule.

4. Challenge DF Details

4.1 Punchline

You can participate in Challenge DF by making ETH predictions for specific times. The Challenge DF README has detailed instructions [5]. We share key aspects here.

4.2 Key dates

Each Challenge DF competition is weekly, in line with the rest of DF.

  • Submission deadline: Every Wednesday at 23:59 UTC when the Data Farming round finishes.
  • Predictions at times: Every Thursday at 00:05, 00:10, …, 01:00 UTC (12 predictions total).

Challenge DF competitions start now. Therefore the first submission deadline is Wednesday Aug 2, 2023 at 23:59 UTC, as part of Data Farming Round 48 (DF48).
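
For reference, the 12 prediction timestamps for a given round can be generated like this (a sketch; the Challenge DF README remains the source of truth):

```python
# Generate the 12 weekly prediction timestamps: every Thursday at
# 00:05, 00:10, ..., 01:00 UTC (5-minute intervals).
from datetime import datetime, timedelta

def prediction_times(thursday_midnight):
    """thursday_midnight: a datetime at 00:00 UTC on the target Thursday."""
    return [thursday_midnight + timedelta(minutes=5 * i) for i in range(1, 13)]

times = prediction_times(datetime(2023, 8, 3))  # Thu Aug 3, 2023 (DF48)
print(len(times))                                # 12
print(times[0].strftime("%H:%M"), times[-1].strftime("%H:%M"))  # 00:05 01:00
```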

4.3 Criteria to Win

The winner = whoever has lowest prediction error (normalized mean-squared error, or NMSE).

To be eligible, competitors must produce the outcomes that the README specifies. This includes having:

  • Created an Ocean data NFT
  • Filled it with (encrypted) predictions
  • Transferred it to the Ocean judges before the deadline
  • And a bit more; the README has full info
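
For intuition, here is one common formulation of NMSE. The README defines the exact metric used for judging; this sketch normalizes MSE by the mean squared true value, which may differ from the README’s exact normalization:

```python
# One common normalized mean-squared error (NMSE) formulation.
# The Challenge DF README is the source of truth for the exact metric.

def nmse(y_true, y_pred):
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    norm = sum(t ** 2 for t in y_true) / n   # mean squared true value
    return mse / norm

# Perfect predictions score 0; lower is better.
print(nmse([1800.0, 1810.0], [1800.0, 1810.0]))  # 0.0
print(nmse([1800.0, 1810.0], [1805.0, 1800.0]))  # small positive number
```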

4.4 Challenge DF Prizes

Weekly Prize Pool: 5,000 OCEAN

  • 1st place: 2,500 OCEAN
  • 2nd place: 1,500 OCEAN
  • 3rd place: 1,000 OCEAN

4.5 Winners Reward Distribution

We will identify winners by the blockchain address they use in the competition (on Mumbai).

Rewards get distributed as part of DF rewards: we allocate OCEAN to winners’ accounts on Ethereum mainnet as part of the Active Rewards contract.

If you are a competitor and want to know if you won, go to the DF Webapp “Challenge DF” section and see the listing of winners for each previous DF Round.

To claim your reward, go to DF Webapp “Rewards” page. Then, within the “Active Rewards” panel, click the “claim” button. This will claim rewards across all Active DF substreams at once.

5. FAQ

  • Q: Data Farming is supposed to be about driving DCV. How does Challenge DF help?
  • A: ETH price predictions — if accurate — make for price feeds that people will readily pay for, to trade against and more. Challenge DF catalyzes the creation of such feeds, and has more streamlined ops than the previous Predict-ETH Data Challenges.
  • Q: Are Challenge DF rewards a function of stake, e.g. veOCEAN held?
  • A: No. Challenge DF rewards are only a function of accuracy. (For now.)
  • Q: Predict-ETH had a blog post announcing each challenge, and another post announcing winners. Will you have the same here?
  • A: Partially. There is already a weekly DF blog post that wraps up the previous DF round and gives parameters for the next DF round. To keep ops streamlined, the blog post will not list winners. If you had submitted to Challenge DF and want to know whether you won, go to the DF Webapp “Challenge DF” section and see the listing of winners for each previous DF Round. Your winnings are claimable as part of regular Active DF rewards.
  • Q: Going from Predict-ETH Data Challenges to Challenge DF, does the prize pool stay the same?
  • A: Approximately, yes. The changes are (a) weekly not monthly, and (b) denominated in OCEAN not USD. Let’s elaborate. Each monthly Predict-ETH competition had total 5000 USD worth of OCEAN worth of prizes. Dividing that by 4 (approx. 4 weeks in a month) gives us 1250 USD per week. Since DF is automated, it’s cleaner to calculate everything in OCEAN not USD. With current OCEAN price about 0.35 USD, then 1250 USD is 3571 OCEAN. We round up to 5000 OCEAN.
  • Q: What’s the difference between “Predict-ETH” and “predict ETH”?
  • A: “Predict-ETH” refers to the specific set of Ocean Data Challenges to predict the price of ETH. “predict ETH” is simply a verb combined with the “what”, around someone predicting the price of ETH; it’s not referring to any given product.
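
The OCEAN-denominated prize conversion in the FAQ above, step by step (the 0.35 USD OCEAN price is the approximate figure quoted in the answer):

```python
# Converting the monthly Predict-ETH prize pool to a weekly OCEAN amount.
monthly_usd = 5000            # Predict-ETH monthly prize pool, USD
weekly_usd = monthly_usd / 4  # ~4 weeks per month -> 1250 USD/week
ocean_price_usd = 0.35        # approximate OCEAN price at the time
weekly_ocean = weekly_usd / ocean_price_usd
print(round(weekly_ocean))    # 3571, rounded up to 5000 for Challenge DF
```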

6. Conclusion

Challenge DF is a new substream of Active DF, which rewards 5,000 OCEAN total in prizes towards those who predict the price of ETH with the lowest error. Expect Active DF to evolve further, towards driving DCV in Ocean ecosystem. Passive DF and total OCEAN rewards emissions are unchanged and will remain so.

Predict ETH price accurately to win! Weekly competitions and prize $. Get started here.

Notes

[1] Data Consume Volume (DCV) is the USD-denominated amount spent to purchase data assets and consume them, for a given time period (e.g. one week).

[2] Expectation setting in the DF Main launch post, section 4.4 “DF Evolution”: “With each weekly cycle, the DF core team may tune the Reward Function or make other changes based on learnings. After DF Main is shipped, the DF core team will expand scope. Current plans are … DF Crunch Program — Kaggle-style data science competitions. Initially, there will be a weekly reward to the best predictor of OCEAN.”

[3] Expectation setting in “DF Main is Here” post, section 4.4: “Competition Data Farming. At some point, some of the Ocean-run competitions (e.g. Predict-ETH) can get streamlined & automated enough to put into weekly DF ops.”

[4] Where in DF should Challenge DF fit? One constraint, out of our commitment to the OCEAN community, was “don’t touch passive DF”, which has 50% of the weekly DF rewards budget. In contrast, Active DF was always meant to evolve, all towards driving DCV. This is why we made Challenge DF a new substream of Active DF.

[5] Challenge DF’s README may evolve week-to-week. Don’t expect this blog post to reflect those changes. Rather, treat the README as the “source of truth” for competition

Follow Ocean Protocol on Twitter, Telegram, or GitHub. And chat directly with the Ocean community on Discord.


Introducing Challenge Data Farming was originally published in Ocean Protocol on Medium.

Ocean Token Model || 2023

https://blog.oceanprotocol.com/ocean-token-model-2023-2f306932f34a?source=rss----9d6d3078be3d---4

A summary of the mechanics of OCEAN, circa Jun 2023

[Image: freerangestock.com]

Summary

OCEAN mechanics currently include locking OCEAN for curation of data assets, burning OCEAN as a function of network revenue, and using OCEAN as a unit-of-exchange.

OCEAN has a generalized design that can incorporate new features, towards the overall goal of growing traction.

Introduction

OCEAN is designed to increase in value with a rise in usage volume [1]:

  • A rise in usage volume → more network revenue, which goes to the Ocean community via (a) burning OCEAN and (b) other community initiatives
  • A rise in usage volume → more OCEAN being locked

So the name of the game is to drive usage volume.

On Network Revenue

The baseline revenue: anytime a data asset is consumed, 0.1% of the value goes to the Ocean community; a portion of that is burned. It’s independent of what currency the data asset is sold for. The higher the data consume volume (DCV), the more OCEAN to the community, and the more value to OCEAN. So the name of the game is to drive DCV [1].
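
The baseline fee above is simple enough to express in one line (a sketch, with a hypothetical DCV figure):

```python
# 0.1% of each data-consume purchase goes to the Ocean community,
# regardless of the currency the asset is sold in.
COMMUNITY_FEE = 0.001  # 0.1%

def community_revenue(dcv_usd):
    """Community revenue (USD) for a given Data Consume Volume (USD)."""
    return dcv_usd * COMMUNITY_FEE

print(community_revenue(1_000_000))  # hypothetical $1M DCV -> 1000.0
```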

Over time, other revenue-generating mechanisms may be added.

OCEAN Distribution Towards Traction

The majority of OCEAN supply is earmarked for the community, distributed over decades, to incentivize locking OCEAN and driving DCV.

Ocean Data Farming (DF) [2][4] has these main mechanics:

  • Passive DF. Lock your OCEAN for veOCEAN → earn weekly OCEAN rewards.
  • Active DF. Currently via curation: Point your veOCEAN to assets that you think will have high DCV, and if they do, you earn OCEAN rewards. DCV isn’t that high (yet), so “active” DF rewards aren’t that high (yet).

Unit of Exchange

OCEAN can also be used as a unit-of-exchange, as it is in Ocean Market [1].

However, do understand that this mechanic doesn’t drive value much, as Chris Burniske’s PV=MQ work showed. Thus, the Ocean core team puts more emphasis on greater drivers of value.

Looking Forward

Inside Ocean core team, all our work is around driving traction, and especially DCV [5]. Stay tuned, some cool stuff is coming, including more revenue streams:)

And there are several teams in the Ocean community building great products powered by Ocean, each of which drives value to OCEAN as it drives DCV.

References

0. OCEAN token page. https://oceanprotocol.com/token

1. “Ocean Token Model”. Oct 2020. https://blog.oceanprotocol.com/ocean-token-model-3e4e7af210f9. It lays out the backbone of OCEAN, which is as relevant as ever. It doesn’t reflect details added since then: veOCEAN, shipping Data Farming, winding down datatoken pools, and the budget shifting from grants to Data Farming.

2. “veOCEAN is Launching, Data Farming is Resuming”. Sep 2022. Introduction of veOCEAN, and active/passive Data Farming. https://blog.oceanprotocol.com/veocean-is-launching-data-farming-is-resuming-abed779211e3

3. “OceanDAO Is Going Fully Decentralized and Autonomous”. Oct 2022. Shifting the budget from grants towards DF. https://blog.oceanprotocol.com/oceandao-is-going-fully-decentralized-and-autonomous-cb4b725e0360

4. “Ocean Data Farming Main is Here”. Mar 2023. Up-to-date description of DF. https://blog.oceanprotocol.com/ocean-data-farming-main-is-here-49c99602419e

5. “Ocean Protocol Update || 2023”. Mar 2023. Description of the core team’s plans for 2023. https://blog.oceanprotocol.com/ocean-protocol-update-2023-44ed14510051


Ocean Token Model || 2023 was originally published in Ocean Protocol on Medium.

Predict-ETH Round Data Challenge 5 is Live

https://blog.oceanprotocol.com/predict-eth-round-data-challenge-5-is-live-d42ca8cf6af0?source=rss----9d6d3078be3d---4

$5000 in prizes available to accurately predict ETH 60 minutes into the future

Summary: Follow the Predict-ETH Round 5 README to submit predictions, and win $$.

About Predict-ETH Competition

Calling all Oceaners and data scientists! How accurately can you predict ETH?

Ocean’s Predict-ETH data challenge is an exciting opportunity for data scientists to showcase their skills and potentially win big. In this competition, you must build a model that can accurately predict the price of Ether (ETH).

It uses Ocean Protocol’s open-source ocean.py library.

Participants retain full control over any model they build for the competition, enabling them to monetize their work through the competition, trade using their model, or sell access to the model as a data feed on the Ocean Market!

Some contestants in previous Predict-ETH rounds had excellent predictions. Can you one-up them?

Prizes

Prize Pool: $5,000 USD worth of OCEAN

  • 1st place: $2,500
  • 2nd place: $1,500
  • 3rd place: $1,000

Setup

See the challenge requirements, and submit your predictions all on the Predict-ETH Round 5 README.

You can use any data you wish — static data or streams, free or priced, raw data or feature vectors, published in Ocean or not. The top-level README links to data feeds and AI modeling approaches that you may find helpful in the challenge.

The submission deadline is May 3, 2023 at 23:59 UTC.

You must submit predictions 5 minutes, 10 minutes, 15 minutes, 20 minutes, …, 55 minutes, 60 minutes into the future.

(⚠️ This is different than Rounds 1–4 which required predictions 1 hour, 2 hours, …, 12 hours into the future.)

Evaluation Criteria

The winner is whoever has the lowest prediction error (normalized mean-squared error, or NMSE). That’s all. There is no subjectivity.
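As a rough sketch of the metric (the exact normalization used for judging is defined in the Round 5 README), NMSE divides the mean-squared error of the predictions by the variance of the true values:

```python
def nmse(y_true, y_pred):
    """Normalized mean-squared error: MSE of the predictions, divided by
    the mean-squared deviation of the true values."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    variance = sum((t - mean) ** 2 for t in y_true) / n
    return mse / variance
```

A score of 0.0 means perfect predictions; 1.0 means no better than always predicting the mean of the true values.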

To be eligible, competitors must produce the outcomes that the Round 5 README specifies:

  • ✔️ Signed up to Desights
  • ✔️ Created an Ocean data NFT and filled it with (encrypted) predictions. Transferred it to the Ocean judges before the deadline
  • And more. The README has details.

(This is different than Rounds 1–3 which required submitting a presentation).

Developer Support, Workshop, Chat

Support. If you encounter issues, feel free to reach out in the Ocean Protocol #dev-support channel on Discord.

Workshop. We will host a Predict-ETH workshop to walk through the READMEs and hold a Q&A with our core team.

Conclusion

Follow the Predict-ETH Round 5 README to submit predictions, and win $$.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord. Or, track Ocean progress directly on GitHub.

About DesightsAI

Desights is a web3 platform to crowdsource solutions to the toughest Data and AI challenges. Organizations and DAOs can fund these challenges and invite the brightest minds in the Data and AI industry to solve them. Apart from being a data challenge platform, Desights aims to be a decentralized community of data scientists and like-minded individuals, where innovative ideas are discussed and new collaborations are fostered.



Getting Started with ocean.py, Part 4: Walkthrough

https://blog.oceanprotocol.com/getting-started-with-ocean-py-part-4-walkthrough-4807b21083a6?source=rss----9d6d3078be3d---4

Introduction

ocean.py is a Python library to privately & securely publish, exchange, and consume data, using Ocean Protocol.

Part 1 of this series introduced ocean.py, and described how to install it. Part 2 described how to set up for local testing, and part 3 for remote.

This post is a fun one! We pick up where parts 2 & 3 left off, and walk you through the main flow.

In this post, you’ll publish a data asset, post for free / for sale, dispense it / buy it, and consume it. We’ll closely follow ocean.py’s main-flow.md.

Background: Ocean Basics

This post assumes a basic understanding of Ocean data NFTs and datatokens. If you don’t yet know about these, we recommend learning about those first, by text or video.


Main Flow: Video

Here’s a video version of this post 👇. Or, jump straight into the text content below.


Main Flow: Text

Steps in the flow:

  1. Alice publishes dataset
  2. Bob gets access to the dataset (faucet, priced, etc)
  3. Bob consumes the dataset

Let’s go!

(Note: this blog post is up-to-date as of Apr 2023, but if 2+ months have passed, we recommend using the READMEs directly for up-to-date instructions. )

1. Alice publishes dataset

Alice! Alice! Who the f*** is Alice?

Well, for the steps below, you are Alice! 👧

In the same Python console:

#data info
name = "Branin dataset"
url = "https://raw.githubusercontent.com/trentmc/branin/main/branin.arff"

#create data asset
(data_nft, datatoken, ddo) = ocean.assets.create_url_asset(name, url, {"from": alice})

#print
print("Just published asset:")
print(f" data_nft: symbol={data_nft.symbol}, address={data_nft.address}")
print(f" datatoken: symbol={datatoken.symbol}, address={datatoken.address}")
print(f" did={ddo.did}")

You’ve now published an Ocean asset!

  • data_nft is the base (base IP)
  • datatoken for access by others (licensing)
  • ddo holding metadata

(For more info, see Appendix: Publish Details.)

2. Bob gets access to the dataset

If you’ve ever spent time in the world of crypto, or cryptography:

Wherever there’s a Bob, there’s always an Alice

OK, OK. So yes, there’s a Bob. He wants to consume the dataset that Alice just published. The first step is for Bob to get 1.0 datatokens.

Below, we show four possible approaches:

  • A & B are when Alice is in contact with Bob. She can mint directly to him, or mint to herself and transfer to him.
  • C is when Alice wants to share access for free, to anyone
  • D is when Alice wants to sell access

In the same Python console:

from ocean_lib.ocean.util import to_wei

#Approach A: Alice mints datatokens to Bob
datatoken.mint(bob, to_wei(1), {"from": alice})

#Approach B: Alice mints for herself, and transfers to Bob
datatoken.mint(alice, to_wei(1), {"from": alice})
datatoken.transfer(bob, to_wei(1), {"from": alice})

#Approach C: Alice posts for free, via a dispenser / faucet; Bob requests & gets
datatoken.create_dispenser({"from": alice})
datatoken.dispense(to_wei(1), {"from": bob})

#Approach D: Alice posts for sale; Bob buys
# D.1 Alice creates exchange
price = to_wei(100)
exchange = datatoken.create_exchange({"from": alice}, price, ocean.OCEAN_address)

# D.2 Alice makes 100 datatokens available on the exchange
datatoken.mint(alice, to_wei(100), {"from": alice})
datatoken.approve(exchange.address, to_wei(100), {"from": alice})

# D.3 Bob lets exchange pull the OCEAN needed
OCEAN_needed = exchange.BT_needed(to_wei(1), consume_market_fee=0)
ocean.OCEAN_token.approve(exchange.address, OCEAN_needed, {"from":bob})

# D.4 Bob buys datatoken
exchange.buy_DT(to_wei(1), consume_market_fee=0, tx_dict={"from": bob})

You may not realize it, but you just covered a crazy amount of ground!! You just went through four different ways to share data! Minting, transferring, faucets, and selling!


What made this possible is that the access control to the data is tokenized, via datatokens. Then you were basically using the tools of crypto to move those datatokens around.

(For more info, see Appendix: Dispenser / Faucet Details and Exchange Details.)

3. Bob consumes the dataset

Bob now has the datatoken for the dataset! Time to download the dataset and use it.

In the same Python console:

# Bob sends a datatoken to the service to get access
order_tx_id = ocean.assets.pay_for_access_service(ddo, {"from": bob})

# Bob downloads the file. If the connection breaks, Bob can try again
asset_dir = ocean.assets.download_asset(ddo, bob, './', order_tx_id)
import os
file_name = os.path.join(asset_dir, "file0")

Let’s check that the file is downloaded. In a new console:

cd my_project/datafile.did:op:*
cat file0

The beginning of the file should contain the following contents:

% 1. Title: Branin Function
% 3. Number of instances: 225
% 6. Number of attributes: 2
@relation branin
@attribute 'x0' numeric
@attribute 'x1' numeric
@attribute 'y' numeric
@data
-5.0000,0.0000,308.1291
-3.9286,0.0000,206.1783
...

(For more info, see Appendix: Consume Details.)

Conclusion / Next Step

You’ve now done a walk-through of the main flow, congrats! As Alice, you published a data asset, posted for free / for sale, and had that Bob guy dispense it / buy it and consume it.

Have questions? Here’s the Ocean Protocol #dev-support channel on Discord. For updates from the Ocean team, follow us on Twitter🙂



Getting Started with ocean.py, Part 3: Set up Remotely

https://blog.oceanprotocol.com/getting-started-with-ocean-py-part-3-set-up-remotely-a3cfec47a3fa?source=rss----9d6d3078be3d---4

Introduction

ocean.py is a Python library to privately & securely publish, exchange, and consume data, using Ocean Protocol.

Part 1 of this series introduced ocean.py, and described how to install it. Part 2 described how to set up for local testing

This post sets us up for a remote flow following ocean.py’s setup-remote.md README.

Here’s a video version of this post. Or, jump straight into the text content below.


Remote Setup

We do setup for Mumbai, the testnet for Polygon. It’s similar for other remote chains.

We assume you’ve already installed Ocean.

Here, we will:

  1. Configure Brownie networks
  2. Create two accounts — REMOTE_TEST_PRIVATE_KEY1 and 2
  3. Get fake MATIC on Mumbai
  4. Get fake OCEAN on Mumbai
  5. Set envvars
  6. Setup in Python — Alice and Bob wallets in Python

Let’s go!

1. Configure Brownie networks (One-Time)

1.1 Network config file

Brownie’s network config file is network-config.yaml. It is located in the .brownie/ subfolder of your home folder.

  • For Linux & MacOS, it’s: ~/.brownie/network-config.yaml
  • For Windows users, it’s: C:\Users\<user_name>\.brownie\network-config.yaml

1.2 Generate network config file (if needed)

If you already see the config file, skip this section.

If you don’t, you need to auto-generate it by calling any Brownie function from a Python console. Here’s an example.

First, in a new or existing console, run Python:

python

In the Python console:

from ocean_lib.example_config import get_config_dict

It will generate the file in the target location. You can check the target location to confirm.

1.3 Contents of network config file

The network configuration file has settings for each network, e.g. development (ganache), Ethereum mainnet, Polygon, and Mumbai.

Each network gets specifications for:

  • host – the RPC URL, i.e. what URL do we pass through to talk to the chain
  • required_confs – the number of confirmations before a tx is done
  • id – e.g. polygon-main (Polygon), polygon-test (Mumbai)

development chains run locally; live chains run remotely.

The example network-config.yaml in Brownie’s GitHub repo is here. It can serve as a comparison to your local copy.

ocean.py uses the network id exactly as it appears in the default Brownie configuration file. Therefore, ensure that your target network name matches the corresponding Brownie id.

1.4 Networks Supported

All Ocean-deployed chains (Eth mainnet, Polygon, etc) should be in Brownie’s default network-config.yaml except Energy Web Chain (EWC).

For Windows users: it’s possible that your network-config.yaml doesn’t have all the network entries. In this case, just replace your local file’s content with the network-config.yaml in Brownie’s GitHub repo, here.

For all users: to use EWC, add the following to network-config.yaml:

- name: energyweb
  networks:
  - chainid: 246
    host: https://rpc.energyweb.org
    id: energyweb
    name: energyweb

1.5 RPCs and Infura

The config file’s default RPCs point to Infura, which require you to have an Infura account with corresponding token WEB3_INFURA_PROJECT_ID.

The next step depends on whether you have an Infura account.

If you do have an Infura account

  • Linux & MacOS users: in console: export WEB3_INFURA_PROJECT_ID=<your infura ID>
  • Windows users: in console: set WEB3_INFURA_PROJECT_ID=<your infura ID>

If you do not have an Infura account

One option is to get an Infura account. A simpler option is to bypass the need for an account: just change to RPCs that don’t need Infura.

You can bypass manually: just edit your brownie network config file.

Or you can bypass via the command-line. The following command replaces Infura RPCs with public ones in network-config.yaml:

  • Linux users: in the console:
sed -i 's#https://polygon-mainnet.infura.io/v3/$WEB3_INFURA_PROJECT_ID#https://polygon-rpc.com/#g; s#https://polygon-mumbai.infura.io/v3/$WEB3_INFURA_PROJECT_ID#https://rpc-mumbai.maticvigil.com#g' ~/.brownie/network-config.yaml
  • MacOS users: you can achieve the same thing with gnu-sed and the gsed command. (Or just manually edit the file.)
  • Windows users: you might need something similar to powershell. (Or just manually edit the file.)

1.6 Network config file wrapup

Congrats, you’ve now configured your Brownie network file! You rarely need to worry about it from now on.

2. Create EVM Accounts (One-Time)

An EVM account is uniquely defined by its private key. Its address is derived from that key. Let’s generate two accounts!

In a new or existing console, run Python.

python

In the Python console:

from eth_account.account import Account
account1 = Account.create()
account2 = Account.create()

print(f"""
REMOTE_TEST_PRIVATE_KEY1={account1.key.hex()}, ADDRESS1={account1.address}
REMOTE_TEST_PRIVATE_KEY2={account2.key.hex()}, ADDRESS2={account2.address}
""")

Then, hit Ctrl-C to exit the Python console.

Now, you have two EVM accounts (address & private key). Save them somewhere safe, like a local file or a password manager.

These accounts will work on any EVM-based chain: production chains like Eth mainnet and Polygon, and testnets like Goerli and Mumbai. Here, we’ll use them for Mumbai.

3. Get (fake) MATIC on Mumbai

We need a network’s native token to pay for transactions on that network. ETH is the native token for Ethereum mainnet; MATIC is the native token for Polygon; and (fake) MATIC is the native token for Mumbai.

To get free (fake) MATIC on Mumbai:

  1. Go to the faucet https://faucet.polygon.technology/. Ensure you’ve selected “Mumbai” network and “MATIC” token.
  2. Request funds for ADDRESS1
  3. Request funds for ADDRESS2

You can confirm receiving funds by going to the following url, and seeing your reported MATIC balance: https://mumbai.polygonscan.com/address/<ADDRESS1 or ADDRESS2>

4. Get (fake) OCEAN on Mumbai

OCEAN can be used as a data payment token, and locked into veOCEAN for Data Farming / curation. The READMEs show how to use OCEAN in both cases.

To get free (fake) OCEAN on Mumbai:

  1. Go to the faucet https://faucet.mumbai.oceanprotocol.com/
  2. Request funds for ADDRESS1
  3. Request funds for ADDRESS2

You can confirm receiving funds by going to the following url, and seeing your reported OCEAN balance: https://mumbai.polygonscan.com/token/0xd8992Ed72C445c35Cb4A2be468568Ed1079357c8?a=<ADDRESS1 or ADDRESS2>

5. Set envvars

As usual, Linux/MacOS needs “export” and Windows needs “set”. In the console:

  • For Linux & MacOS users: in the console:
# For accounts: set private keys
export REMOTE_TEST_PRIVATE_KEY1=<your REMOTE_TEST_PRIVATE_KEY1>
export REMOTE_TEST_PRIVATE_KEY2=<your REMOTE_TEST_PRIVATE_KEY2>
  • For Windows users: in the console:
# For accounts: set private keys
set REMOTE_TEST_PRIVATE_KEY1=<your REMOTE_TEST_PRIVATE_KEY1>
set REMOTE_TEST_PRIVATE_KEY2=<your REMOTE_TEST_PRIVATE_KEY2>

6. Setup in Python

In your working console, run Python:

python

In the Python console:

# Create Ocean instance
from ocean_lib.web3_internal.utils import connect_to_network
connect_to_network("polygon-test") # mumbai is "polygon-test"

import os
from ocean_lib.example_config import get_config_dict
from ocean_lib.ocean.ocean import Ocean
config = get_config_dict("polygon-test")
ocean = Ocean(config)

# Create OCEAN object. ocean_lib knows where OCEAN is on all remote networks
OCEAN = ocean.OCEAN_token

# Create Alice's wallet
from brownie.network import accounts
accounts.clear()

alice_private_key = os.getenv('REMOTE_TEST_PRIVATE_KEY1')
alice = accounts.add(alice_private_key)
assert alice.balance() > 0, "Alice needs MATIC"
assert OCEAN.balanceOf(alice) > 0, "Alice needs OCEAN"

# Create Bob's wallet. While some flows just use Alice wallet, it's simpler to do all here.
bob_private_key = os.getenv('REMOTE_TEST_PRIVATE_KEY2')
bob = accounts.add(bob_private_key)
assert bob.balance() > 0, "Bob needs MATIC"
assert OCEAN.balanceOf(bob) > 0, "Bob needs OCEAN"

# Compact wei <> eth conversion
from ocean_lib.ocean.util import to_wei, from_wei

If you get a gas-related error like transaction underpriced, you’ll need to change the priority_fee or max_fee. See details in brownie docs.

Conclusion / Next step

You’ve now set up everything you need for testing on a remote chain, congrats! It’s similar for any remote chain.

The next step is to walk through the main flow, with this remote setup. In it, you’ll publish a data asset, post for free / for sale, dispense it / buy it, and consume it. We’ll cover that in the next post of this series. (Here’s the ocean.py README version of it.)

Have questions? Here’s the Ocean Protocol #dev-support channel on Discord. For updates from the Ocean team, follow us on Twitter🙂



Data Farming DF28 Completes and DF29 Launches. DF Main is Here!

https://blog.oceanprotocol.com/data-farming-df27-completes-and-df28-launches-df-main-is-here-2b842ecba5ce

Stakers can claim DF28 rewards. DF29 runs Mar 16-Mar 23, 2023

1. Overview

Data Farming Round 29 is here (DF29).

DF29 is the first week for DF Main, the final phase of DF! This week, users can earn rewards up to 150K OCEAN. In DF Main, weekly rewards will grow to 1M+ OCEAN.

The article “Ocean Data Farming Main is Here” has the full details of DF Main. In fact, it’s a self-contained description of Ocean Data Farming (DF), including all the details that matter. It is up-to-date with the latest reward function, weekly OCEAN allocation, and estimates of APYs given the current amount of OCEAN staked.

The rest of this article is the “usual” weekly update: recap of the previous week, and details of the upcoming week.

DF is like DeFi liquidity mining or yield farming, but is tuned to drive data consume volume (DCV) in the Ocean ecosystem. It rewards stakers with OCEAN who allocate voting power to curate data assets with high DCV.

To participate, users lock OCEAN to receive veOCEAN, then allocate veOCEAN to promising data assets (data NFTs) via the DF dapp.

DF28 counting started 12:01am Mar 9, 2023 and ended 12:01am Mar 16, 2023. 75K OCEAN worth of rewards were available. Those rewards are now ready for claiming. You can claim them at the DF dapp Claim Portal.

DF29 is live and will conclude on Mar 23, 2023.

DF Round 29 (DF29) is the first week of DF Main. Details of DF Main can be found here.

The rest of this post describes how to claim rewards (section 2), and DF29 overview (section 3).

2. How To Claim Rewards

As a participant, follow these steps to claim rewards:

  • Go to DF dapp Claim Portal
  • Connect your wallet
  • Passive and Active Rewards are distributed on Ethereum mainnet. Click “Claim”, sign the tx, and collect your rewards

Rewards accumulate over weeks so you can claim rewards at your leisure. If you claim weekly, you can re-stake your rewards for compound gains.
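To see why weekly re-staking helps, here’s a toy comparison in Python. The 10% annual rate is purely illustrative; actual rewards vary week to week:

```python
def compounded(principal: float, weekly_rate: float, weeks: int) -> float:
    """Claim and re-stake rewards every week."""
    for _ in range(weeks):
        principal *= 1 + weekly_rate
    return principal

def uncompounded(principal: float, weekly_rate: float, weeks: int) -> float:
    """Let rewards accumulate unclaimed; no compounding."""
    return principal * (1 + weekly_rate * weeks)

weekly = 0.10 / 52  # illustrative: a 10% simple annual rate
print(compounded(1000, weekly, 52))    # ≈1105 OCEAN
print(uncompounded(1000, weekly, 52))  # 1100.0 OCEAN
```

The gap grows with the rate and the horizon, which is why claiming and re-staking weekly is worthwhile if gas costs allow.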

3. DF29 Overview

DF29 is part of DF Main, phase 1. This phase emits 150K OCEAN / week and runs for 52 weeks total. (A detailed DF Main schedule is here.)

Ocean currently supports five production networks: Ethereum Mainnet, Polygon, BSC, EWC, and Moonriver. DF applies to data on all of them.

Some key parameters:

  • Total budget is 150,000 OCEAN.
  • 50% of the budget goes to passive rewards (75,000 OCEAN) — rewarding users who hold veOCEAN (locked OCEAN)
  • 50% of the budget goes to active rewards (75,000 OCEAN) — rewarding users who allocate their veOCEAN towards productive datasets (having DCV).

Active rewards are calculated per the DF reward function. For further details, see “DF Reward Function Details” in the DF Main Appendix.

As usual, the Ocean core team reserves the right to update the DF rewards function and parameters, based on observing behavior. Updates are always announced at the beginning of a round, if not sooner.

Conclusion

DF28 has completed. To claim rewards, go to DF dapp Claim Portal.

DF29 begins Mar 16, 2023 at 12:01am UTC. It ends Mar 23, 2023 at 12:01am UTC.

DF29 is part of DF Main. For this phase of DF Main, the rewards budget is 150K OCEAN / week.

Further Reading

The Data Farming Series post collects key articles and related resources about DF.

Follow Ocean Protocol on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord. Or, track Ocean progress directly on GitHub.



Ocean Data Farming Main is Here

https://blog.oceanprotocol.com/ocean-data-farming-main-is-here-49c99602419e?source=rss----9d6d3078be3d---4

The final phase of DF is live. 150K weekly OCEAN rewards, which will grow to 1M+. Earn by locking OCEAN and by Curating Data

Contents
1. Abstract
2. Introduction
3. veOCEAN Review
4. Data Farming Review
5. Walk-Through Numbers
6. On Implementing DF Main
7. Conclusion
(Plus Appendices)

1. Abstract

Data Farming (DF) incentivizes growth of Data Consume Volume (DCV) [0] in the Ocean ecosystem. Its final phase is DF Main. DF Main emits 503.4M OCEAN worth of rewards and lasts for decades.

DF Main starts Mar 16, 2023 in DF Round 29. DF29 has 150K OCEAN rewards available (a 2x increase from DF28). As DF Main progresses, rewards will increase to 300K (another 2x), then 600K (another 2x), then beyond 1.1M OCEAN/week (near 2x) then decaying over time.

As of DF29, wash consuming will no longer be profitable. So, organically-generated Data Consume Volume will be the main driver of active DF rewards.

Typical APYs are 5–20%.

Full implementation of DF Main will be over many months, after which DF will be decentralized. DF Main lasts for decades.

2. Introduction

This article is a self-contained description of Ocean Data Farming (DF), including all the details that matter. It is up-to-date with the latest reward function, weekly OCEAN allocation, and estimates of APYs given the current amount of OCEAN staked.

The rest of this article is organized as follows.

2.1 Overview of veOCEAN & Data Farming

Lock OCEAN → get veOCEAN → earn Data Farming rewards

veOCEAN. You can lock OCEAN to get veOCEAN. The amount of OCEAN you receive when the lock ends will always be equal to the amount you locked; plus there will be rewards in the meantime. To max out rewards, you must max out lock time (4 years) and amount of OCEAN locked.

Data Farming (DF) incentivizes growth of Data Consume Volume (DCV) in the Ocean ecosystem. DF is like DeFi liquidity mining, but tuned for DCV. DF emits OCEAN for passive rewards and active rewards.

  • As a veOCEAN holder, you get passive rewards by default.
  • If you actively curate data by allocating veOCEAN towards data assets with high Data Consume Volume (DCV), then you can earn more.

Typical APYs are 5–20%, but may be as high as 45% and as low as 1%. APYs will vary week to week. The value depends on total OCEAN staked, duration of OCEAN lock, DCV, and other factors. The “Example APYs” section has details.

2.2 DF Schedule & OCEAN Emissions

The DF schedule follows four phases:

  • DF Alpha — rounds 1–4 (4 wks). 10K OCEAN / wk
  • DF/VE Alpha — rounds 5–8 (4 wks). 10K OCEAN / wk
  • DF Beta — rounds 9–28 (20 wks)
    – DF Beta 1 — rounds 9–18 (10 wks). 50K OCEAN / wk
    – DF Beta 2 — rounds 19–28 (10 wks). 75K OCEAN / wk
  • DF Main — rounds 29+ (decades)
    – DF Main 1 — rounds 29–79 (52 wks). 150K OCEAN / wk, a 2x increase
    – DF Main 2 — rounds 80–105 (26 wks). 300K OCEAN / wk, a 2x increase
    – DF Main 3 — rounds 106–131 (26 wks). 600K OCEAN / wk, a 2x increase
    – DF Main 4 — rounds 132–1000+ (decades). >1.1M OCEAN / wk (2x increase) then decaying over time.

DF Main has 503.4M OCEAN allocated, of OCEAN’s 1.41B capped supply. The “OCEAN Emissions Schedule” section plots OCEAN / week versus time, and gives more details.

DF Main starts today, Mar 16, 2023.

3. veOCEAN Review

3.1 Webapp for veOCEAN

Use the veOCEAN page at the Data Farming webapp: df.oceandao.org/veocean. At it, you can lock OCEAN to get veOCEAN, see your veOCEAN balance, and more.

“veOCEAN” page at Data Farming webapp

You can follow the URL directly, or go to the main Ocean site (oceanprotocol.com) and click the prominent “Earn Rewards” button in the “veOcean & Data Farming” panel.

Getting to Data Farming page from main Ocean site (oceanprotocol.com)

3.2 veOCEAN Mechanics

ve tokens have been introduced by several projects such as Curve and Balancer. These tokens require users to lock project tokens in return for ve<project tokens>.

In exchange for locking tokens, users can earn rewards. The amount of reward depends on how long the tokens are locked for. Furthermore, veTokens can be used for voting via data asset curation.

We rolled out veOCEAN to give token holders the ability to lock OCEAN to earn yield, and curate data.

People can lock their OCEAN up to 4 years to get veOCEAN. If someone locks 1,000 OCEAN, they get 1,000 OCEAN back at the end, plus rewards along the way.

veOCEAN supports passive locking of OCEAN by default. Users can get higher yield by active curation of data assets in the DF setting.

3.3 veOCEAN Core Idea

The core idea is: lock OCEAN for longer for higher rewards and more voting power. A locker can be passive, though they earn more if active.

You receive proportionally more veOCEAN for longer lock times, as follows:

  • lock 1 OCEAN for 4 years → get 1.0 veOCEAN
  • lock 1 OCEAN for 2 years → get 0.50 veOCEAN
  • lock 1 OCEAN for 1 year → get 0.25 veOCEAN
  • lock 1 OCEAN for 2 weeks → get 0.0096 veOCEAN = 2/ (4 * 52)
  • lock 1 OCEAN for 1 week → get 0.0048 veOCEAN = 1 / (4 * 52) [but you only get rewards if >1 week]

Critically, veOCEAN cannot be unlocked before the pre-set time. If you’ve locked some OCEAN for a year, you can’t unlock it during that time. The amount of OCEAN you’ll receive when the lock ends will always be equal to the amount you locked.

You can always extend your lock time or increase the lock amount. But lock time cannot be decreased.

veOCEAN is non-transferable. You can’t send it to others.

veOCEAN held decays linearly over time. If you lock 1.0 OCEAN for four years, you get 1.0 veOCEAN at the start. After 1 year, you have 0.75 veOCEAN; after 2 years → 0.5 veOCEAN; after 3 years → 0.25 veOCEAN; after 4 years → 0.0 veOCEAN, and your OCEAN is unlocked [1].
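Both the lock-time conversion and the linear decay can be sketched in a few lines of Python. This is an illustration only; the on-chain ve contracts track time in seconds, not weeks:

```python
MAX_LOCK_WEEKS = 4 * 52  # 4-year maximum lock

def veocean_at_lock(ocean_amount: float, lock_weeks: int) -> float:
    """veOCEAN received at lock time: linear in lock duration."""
    return ocean_amount * lock_weeks / MAX_LOCK_WEEKS

def veocean_remaining(ocean_amount: float, lock_weeks: int,
                      weeks_elapsed: int) -> float:
    """veOCEAN balance decays linearly to zero as the unlock date nears."""
    weeks_left = max(lock_weeks - weeks_elapsed, 0)
    return ocean_amount * weeks_left / MAX_LOCK_WEEKS
```

For example, locking 1 OCEAN for 4 years gives 1.0 veOCEAN at the start; after 1 year, 0.75 veOCEAN remains.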

You can top up veOCEAN anytime by updating the lock time to > current time left, via the veOCEAN page in the DF app (df.oceandao.org/veocean).

For more details on veOCEAN, see “Introducing veOCEAN”.

4. Data Farming Review

4.1 Webapp for Data Farming

The Data Farming webapp is at df.oceandao.org. It allows you to perform all the core DF activities like locking OCEAN for veOCEAN, and allocating veOCEAN to data assets.

“Farms” page at Data Farming webapp

4.2 DF Overview

DF incentivizes growth of data consume volume in the Ocean ecosystem. It rewards OCEAN to veOCEAN holders who curate towards data assets with high consume volume. DF’s aim is to achieve a minimum supply of data for network effects to kick in, and once the network flywheel is spinning, to increase growth rate.

4.3 Current DF Reward Function

The Reward Function (RF) governs how active rewards are allocated to stakers.

Rewards are calculated per the DF reward function; see “Appendix: DF Reward Function Details” for the formula and more information.
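For intuition only, here is a toy sketch of splitting an active-rewards budget in proportion to stake times DCV. This is not the production formula; the real function, including its caps and wash-consume bounds, is specified in the appendix:

```python
def active_rewards(budget: float, stakes: dict, dcv: dict) -> dict:
    """Toy allocation: each (staker, asset) pair gets a share of the weekly
    budget proportional to stake_ij * DCV_j.
    stakes: {(staker, asset): veOCEAN allocated}
    dcv:    {asset: data consume volume}
    """
    scores = {key: stake * dcv[key[1]] for key, stake in stakes.items()}
    total = sum(scores.values())
    if total == 0:
        return {key: 0.0 for key in stakes}
    return {key: budget * score / total for key, score in scores.items()}
```

Note how an asset with zero DCV earns its stakers nothing, no matter how much veOCEAN is allocated to it: curation only pays when data actually gets consumed.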

4.4 *Future* DF Reward Function

The Reward Function (RF) has evolved over DF rounds, as chronicled in the Data Farming Series posts.

To drive DCV further, we envision further evolution of DF Rewards. Besides tuning the existing function, we can split the weekly active DF rewards budget into DF sub-streams. New sub-streams could include:

  • Dapp Data Farming. If a dapp running Ocean uses x$ of gas, pay the dapp developer 25–100% * x (denominated in OCEAN). This could be an excellent way for dapp developers to monetize [2]
  • Competition Data Farming. At some point, some of the Ocean-run competitions (e.g. Predict-ETH) can get streamlined & automated enough to put into weekly DF ops.
  • And more.

Note: this section was adapted from “Ocean Protocol Update || 2023”.

4.5 On Claiming DF Rewards

How to claim? Go to the DF Webapp at df.oceandao.org/activerewards

“Rewards” page at Data Farming webapp

Where to claim? All earnings for veOCEAN holders are claimable on Ethereum mainnet. Though, data assets for DF may be published on any network where Ocean is deployed in production: Eth mainnet, Polygon, etc.

When to claim? There’s a new DF round every week; in line with this, there’s a new veOCEAN distribution “epoch” every week. This affects when you can first claim rewards. Specifically, if you lock OCEAN on day x, you’ll be able to claim rewards in the first ve epoch that begins after day x+7. Put another way, from the time you lock OCEAN, you must wait at least a week, and up to two weeks, to be able to claim rewards. (This behavior is inherited from veCRV. Here’s the code.)
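The timing rule can be sketched with simple date arithmetic. The Thursday epoch boundary below is an assumption (mirroring veCRV’s weekly epochs); the actual boundary is fixed by the ve contracts:

```python
from datetime import date, timedelta

EPOCH_WEEKDAY = 3  # Thursday (assumption, mirroring veCRV's weekly epochs)

def first_claim_date(lock_day: date) -> date:
    """Start of the first weekly ve epoch that begins after lock_day + 7 days."""
    earliest = lock_day + timedelta(days=7)
    # advance to the next epoch boundary strictly after `earliest`
    days_ahead = (EPOCH_WEEKDAY - earliest.weekday() - 1) % 7 + 1
    return earliest + timedelta(days=days_ahead)
```

Locking right on an epoch boundary gives the full two-week wait; locking the day before a boundary gets close to the one-week minimum.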

4.6 Data Assets that Qualify for DF

Data assets that have veOCEAN allocated towards them get DF rewards.

The data asset may be of any type — dataset (for static URIs) or algorithm for Compute-to-Data. The data asset may be fixed-price or free. If fixed-price, any token of exchange is alright (OCEAN, H2O, USDC, ..).

To qualify for DF, a data asset must also:

4.7 Active Work to Drive APY

Data Farming is not a wholly passive activity. The name of the game is to drive data consume volume (DCV). High APYs happen only when there is sufficiently high DCV. High DCV means publishing and consuming truly useful datasets (or algorithms).

Thus, if you really want to max out your APY:

  • create & publish datasets (and make $ in selling them) — or work with people who can
  • consume the datasets (to make $) — or work with people who can
  • go stake on them, and finally claim the rewards.

Driving DCV for publishing & consuming is your challenge. It will take real work. And then the reward is APY. It’s incentives all the way down:)

5. Walk-Through Numbers

This section walks through example numbers: OCEAN emissions schedule, and estimated APYs.

5.1 OCEAN Emissions Schedule

The baseline emissions schedule determines the weekly OCEAN budget for this phase. The schedule is like Bitcoin’s, including a half-life of 4 years. Unlike Bitcoin, there is a burn-in period to ratchet up value-at-risk versus time:

  • The curve initially gets a multiplier of 10% for 12 months (DF Main 1)
  • Then, it transitions to multiplier of 25% for 6 months (DF Main 2)
  • Then, a multiplier of 50% for 6 months (DF Main 3)
  • Finally, a multiplier of 100%. (DF Main 4)

Because they are short, we implement the first three phases as constants. We implement the fourth phase as a Bitcoin-style exponential: constant, with the constant dividing by two (“halvening”) every four years.
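
A minimal sketch of this schedule, using the round weekly figures from this post (150K, 300K, 600K, then roughly 1.1M OCEAN/week halvening every 4 years) and approximate phase lengths; the authoritative schedule lives in the df-py repo:

```python
def weekly_emission(weeks_into_main: int) -> float:
    """Approximate weekly OCEAN budget, with weeks counted from the start
    of DF Main (DF29). Round numbers from this post, not the exact curve."""
    if weeks_into_main < 52:        # DF Main 1: ~12 months at 10%
        return 150_000.0
    if weeks_into_main < 78:        # DF Main 2: ~6 months at 25%
        return 300_000.0
    if weeks_into_main < 104:       # DF Main 3: ~6 months at 50%
        return 600_000.0
    # DF Main 4: Bitcoin-style step "halvening" every 4 years (208 weeks)
    return 1_100_000.0 * 0.5 ** ((weeks_into_main - 104) // 208)
```

Note the integer division in the last line: within each 4-year window the emission is constant, then steps down by half, exactly the Bitcoin-style behavior described above.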

Let’s visualize!

Emissions — first 5 years. The image below shows the first 5 years. The y-axis is OCEAN released each week, log-scaled to make the differences easy to see. The x-axis is time, measured in weeks. We can see the distinct phases for DF Alpha (DF1 // week 0), DF/VE Alpha (DF5 // week 4), DF Beta (DF9 // week 8), DF Main 1 (DF29 // week 28), DF Main 2 (DF80 // week 79), DF Main 3 (DF106 // week 105), and DF Main 4 (DF132 // week 131).

OCEAN released to DF per week — first 5 years

Emissions — First 20 years. The image below is like the previous one: OCEAN released per week, but now for the first 20 years. Week 131 onwards is DF Main 4. We can see that the y-value divides by two (“halvens”) every four years.

OCEAN released to DF per week — first 20 years

Compare halvening vs. smooth exponential. In the image below, the blue curve is identical to the previous image: OCEAN released per week for the first 20 years. As discussed, the first three DF Main phases are constants, and the final DF Main phase is an exponential with the Bitcoin-style "halvening" approach. For comparison, the black curve shows what the schedule would look like if all four DF Main phases were implemented with a smooth exponential. It's interesting to see how the black curve slices through the blue curve. Blue is far simpler to explain (e.g. constant phases) and lower-risk to implement.

Blue: OCEAN released to DF per week — first 20 years. Black: compare to four smooth exponentials

Total OCEAN released. The image below shows the total OCEAN released by DF for the first 20 years. The y-axis is log-scaled to capture both the small initial rewards and exponentially larger values later on. The x-axis is also log-scaled so that we can more readily see how the curve converges over time.

Total OCEAN released to DF — first 20 years

5.2 Example APYs

The plot below shows estimated APY over time. Green includes both passive and active rewards; black is just passive rewards. As of DF29, wash consume is no longer profitable, so we should expect a large drop in DCV and therefore in active rewards. So passive rewards (black) provides a great baseline with upside in active rewards (green).

APYs are an estimate because APY depends on OCEAN locked. OCEAN locked for future weeks is not known precisely; it must be estimated. The plot shows the model for OCEAN locked in yellow. We model OCEAN locked by observing linear growth from week 5 (when OCEAN locking was introduced) to week 28 (now): OCEAN locked grew from 7.89M OCEAN to 34.98M OCEAN respectively, or 1.177M more OCEAN locked per week.
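
The linear model described above can be written directly from the two observations. A sketch (function name ours):

```python
def est_ocean_locked_M(week: int) -> float:
    """Linear model of total OCEAN locked, in millions of OCEAN,
    fit to the two observations in the post: 7.89M at week 5
    and 34.98M at week 28."""
    slope = (34.98 - 7.89) / (28 - 5)  # ~1.177M more OCEAN locked per week
    return 7.89 + slope * (week - 5)
```

Extrapolating this line forward gives the yellow "estimated staking" curve in the plot.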

Green: estimated APYs (passive + active). Black: estimated APYs (just passive). Yellow: estimated staking

The plots are calculated from this Google Sheet.

OCEAN lock time affects APY. The numbers above assume that all locked OCEAN is locked for 4 years, so that 1 OCEAN → 1 veOCEAN. But APY could be considerably lower or higher if your lock duration differs from everyone else's. Here are approximate bounds.

  • If you lock for 4 years, and everyone else locks for 2, then multiply expected APY by 2. If you lock for 4 years and others for 1, then multiply by 4.
  • Conversely, if you lock for 2 years and everyone else for 4, then divide your expected APY by 2. If you lock for 1 year and others for 4, then divide by 4.
  • The numbers assume that you’re actively allocating veOCEAN allocation towards high-DCV data assets. For passive locking or low-DCV data assets, divide APY by 2 (approximate).
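
These bounds follow from veOCEAN per OCEAN scaling linearly with lock duration, so your share of rewards scales with your lock time relative to everyone else's. A sketch of that rule of thumb (function name ours):

```python
def apy_multiplier(my_lock_years: float, typical_lock_years: float) -> float:
    """Approximate scaling of your expected APY, relative to the baseline
    where everyone locks for the same duration: veOCEAN per OCEAN grows
    linearly with lock time, so rewards shift toward longer locks."""
    return my_lock_years / typical_lock_years
```

For example, locking 4 years while others lock 2 doubles your expected APY, and locking 1 year while others lock 4 quarters it, matching the bullets above.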

6. On Implementing DF Main

Ratchet Principle. As DF Main involves a huge amount of OCEAN, we take extra precautions and follow the principle "ratchet up value-at-risk over time". What this means: rather than sending all this OCEAN directly to the vesting contracts, we "buy time" to more thoroughly verify the system.

Changes for DF29. For DF29, the implementation changes are surprisingly small: (a) OCEAN rewards increase to 150,000/week; (b) the multiplier for DCV bounds auto-sets to 0.1%, which makes wash consume unprofitable. As usual, we deploy DF rewards with a combination of manual dispensing and GitHub Actions (Web2 automation).

Implementation over time. Then we can ratchet up value at risk as we deploy more components, and put OCEAN into them. Here’s the order of operations.

  • Deploy canary, wire it up. First we will deploy a “canary” vesting contract for DF Main 1. Wire it together with the rest of DF stack, including Gelato for Web3 automation. Send the contract a small amount (1000 OCEAN). It will dispense through components to DF passive & active rewards addresses. (Weekly payouts will draw from this plus the main DF payment multisig; the latter being far larger.)
  • Verification. This allows us to test the system live, in production yet with a small amount of funds at risk. At the same time, we will initiate a bug bounty program and security audit for all these components.
  • Weekly rewards keep rolling. OCEAN payouts for DF29, DF30, etc will be as scheduled for DF main. A tiny amount will come from the “canary” vesting contract, and the rest will be topped up manually by the Ocean core team.
  • Deploy the rest. Once the verification is complete, then we will deploy the remaining contracts and move the remaining OCEAN accordingly.

This Github issue has details of the plan.

Final outcome. In completing the implementation work above, OCEAN vesting will be fully automated and on-chain. Then we will tackle decentralizing DF-main reward calculations, leveraging advances in decentralized compute infrastructure. This will serve the Ocean ecosystem well in the decades that follow, with more transparency, stability, and composability.

7. Conclusion

Data Farming (DF) incentivizes growth of Data Consume Volume (DCV) in the Ocean ecosystem. Its final phase is DF Main. DF Main emits 503.4M OCEAN worth of rewards and lasts for decades.

DF Main starts Mar 16, 2023, in DF Round 29. DF29 has 150K OCEAN rewards available (a 2x increase from DF28). As DF Main progresses, rewards will increase to 300K (another 2x), then 600K (another 2x), then beyond 1.1M OCEAN/week (nearly 2x), then decay over time.

As of DF29, wash consuming will no longer be profitable. So, organically-generated Data Consume Volume will be the main driver of active DF rewards.

Typical APYs are 5–20%.

Full implementation of DF Main will be over many months, after which DF will be decentralized. DF Main lasts for decades.

Final caveat: we reserve the right to make reasonable changes to these plans, if unforeseen circumstances emerge.

Appendix: Data Farming FAQ

Q: I staked for just one day. What rewards might I expect?

At least 50 snapshots are randomly taken throughout the week. If you've staked for just one day then, all else equal, you should expect about 1/7 of the rewards compared to staking the full 7 days.
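
A quick Monte-Carlo sanity check of this rule of thumb, assuming snapshot times are uniform-random within the week (a simplification of the actual snapshot mechanism):

```python
import random

def expected_reward_fraction(days_staked: float, trials: int = 200_000) -> float:
    """Fraction of uniform-random snapshot times (over a 7-day week)
    at which a stake held for the first `days_staked` days is counted."""
    random.seed(0)  # deterministic for reproducibility
    hits = sum(1 for _ in range(trials) if random.uniform(0, 7) < days_staked)
    return hits / trials
```

For one day staked, this comes out very close to 1/7, matching the answer above.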

Q: The datatoken price may change throughout the week. What price is taken in the DCV calculation?

The price is taken at the same time as each consume. E.g. if a data asset has three consumes, where price was 1 OCEAN when the first consume happened, and the price was 10 OCEAN when the other consumes happened, then the total DCV for the asset is 1 + 10 + 10 = 21.
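
In code form, this is just a sum over per-consume prices (a trivial sketch; function name ours):

```python
def asset_dcv(prices_at_consume: list[float]) -> float:
    """DCV for one asset over a period: the datatoken price observed
    at the moment of each consume, summed over all consumes."""
    return sum(prices_at_consume)
```

The FAQ's example, `asset_dcv([1, 10, 10])`, gives 21.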

Q: Can the reward function change during a given week?

No. At the beginning of a new DF round (DF1, DF2, etc), rules are laid out, either implicitly if no change from previous round, or explicitly in a blog post if there are new rules. This is: reward function, bounds, etc. Then teams stake, buy data, consume, etc. And LPs are given DF rewards based on staking, DCV, etc at the end of the week. Overall cycle time is one week.

Caveat: it’s no at least in theory! Sometimes there may be tweaks if there is community consensus, or a bug.

Appendix: DF Schedule as Table

The table below cross-references DF round number (DF i), start date, phase & week, sub-phase & week, and OCEAN rewards / week.

Appendix: veOCEAN Revenue & Flow of Value

Community Fees to veOCEAN holders. Earlier, we established how veOCEAN holders earn passive and active rewards through Data Farming.

veOCEAN holders actually have a second source of revenue: community fees. Specifically: Every transaction in Ocean Market and Ocean backend generates transaction fees, some of which go to the community. 95% of the community fees will go to veOCEAN holders; the rest is used to buy back & burn OCEAN. All earnings here are passive.

Flow of Value. The image below illustrates the flow of value. On the left, at time 0, the user locks their OCEAN into the veOCEAN contract, and receives veOCEAN. In the middle, the veOCEAN holder receives OCEAN rewards every time there’s revenue to the Ocean Protocol Community (top), and also as part of DF rewards (bottom). On the right, when the lock expires (e.g. 4 years) then the user is able to move all their OCEAN around again.

Flow of value

Appendix: DF Reward Function Details

Earlier, we described the Reward Function (RF) in text. For thoroughness, here is the RF code. Given the tunings, the code is cleaner than the math. It's from calcrewards.py in the Ocean Protocol df-py repo.

def _calcRewardsUsd(
    S: np.ndarray,
    V_USD: np.ndarray,
    C: np.ndarray,
    DCV_multiplier: float,
    OCEAN_avail: float,
    do_pubrewards: bool,
    do_rank: bool,
) -> np.ndarray:
    """
    @arguments
      S -- 2d array of [LP i, chain_nft j] -- stake for each {i,j}, in veOCEAN
      V_USD -- 1d array of [chain_nft j] -- nftvol for each {j}, in USD
      C -- 1d array of [chain_nft j] -- the LP i that created j. -1 if not LP
      DCV_multiplier -- via calcDcvMultiplier(DF_week). Is an arg to help test.
      OCEAN_avail -- amount of rewards available, in OCEAN
      do_pubrewards -- 2x effective stake to publishers?
      do_rank -- allocate OCEAN to assets by DCV rank, vs pro-rata
    @return
      R -- 2d array of [LP i, chain_nft j] -- rewards denominated in OCEAN
    """
    N_i, N_j = S.shape

    # corner case
    if np.sum(V_USD) == 0.0:
        return np.zeros((N_i, N_j), dtype=float)

    # modify S's: owners get rewarded as if 2x stake on their asset
    if do_pubrewards:
        for j in range(N_j):
            if C[j] != -1:  # -1 = owner didn't stake
                S[C[j], j] *= 2.0

    # perc_per_j
    if do_rank:
        perc_per_j = _rankBasedAllocate(V_USD)
    else:
        perc_per_j = V_USD / np.sum(V_USD)

    # compute rewards
    R = np.zeros((N_i, N_j), dtype=float)
    for j in range(N_j):
        stake_j = sum(S[:, j])
        DCV_j = V_USD[j]
        if stake_j == 0.0 or DCV_j == 0.0:
            continue
        for i in range(N_i):
            perc_at_j = perc_per_j[j]
            stake_ij = S[i, j]
            perc_at_ij = stake_ij / stake_j
            # main formula!
            R[i, j] = min(
                perc_at_j * perc_at_ij * OCEAN_avail,
                stake_ij * TARGET_WPY,  # bound rewards by max APY
                DCV_j * DCV_multiplier,  # bound rewards by DCV
            )

    ...
    return R

Appendix: Contract Deployments

The veOCEAN & DF contracts are deployed to Ethereum mainnet, alongside other Ocean contract deployments. Full list.

{
  "veOCEAN": "0xE86Bf3B0D3a20444DE7c78932ACe6e5EfFE92379",
  "veAllocate": "0x55567E038b0a50283084ae773FA433a5029822d3",
  "veDelegation": "0xc768eDF2d21fe00ef5804A7Caa775E877e65A70E",
  "veFeeDistributor": "0x256c54219816603BB8327F9019533B020a76e936",
  "veDelegationProxy": "0x45E3BEc7D139Cd8Ed7FeB161F3B094688ddB0c20",
  "veFeeEstimate": "0xe97a787420eD263583689Bd35B7Db1952A94710d",
  "SmartWalletChecker": "0xd7ddf62257A41cc6cdAd7A3d36e4f1d925fD142a",
  "DFRewards": "0xFe27534EA0c016634b2DaA97Ae3eF43fEe71EEB0",
  "DFStrategyV1": "0x545138e8D76C304C916B1261B3f6c446fe4f63e3"
}

veFeeDistributor has a start_time of 1663804800 (Thu Sep 22 2022 00:00:00)

Notes

[0] Data Consume Volume (DCV) is the USD$-denominated amount spent to purchase data assets and consume them, for a given time period (e.g. one week)

[1] veOCEAN held decays linearly over time. You can calculate the balance as follows: veOCEAN_balance = OCEAN_amount_locked * (your_unlock_timestamp - current_unix_timestamp) / (60 * 60 * 24 * 7 * 52 * 4), where the denominator is the number of seconds in the maximum lock of 4 years.
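
As code, with the operator precedence made explicit (variable names ours, following the footnote):

```python
MAX_LOCK_SECONDS = 60 * 60 * 24 * 7 * 52 * 4  # 4 years, the maximum lock

def ve_balance(ocean_locked: float, unlock_ts: int, now_ts: int) -> float:
    """veOCEAN balance per footnote [1]: decays linearly to zero at the
    unlock time. A full 4-year lock starts at 1 veOCEAN per OCEAN locked."""
    remaining = max(unlock_ts - now_ts, 0)  # seconds until unlock, floor 0
    return ocean_locked * remaining / MAX_LOCK_SECONDS
```

So 100 OCEAN locked for the full 4 years starts at 100 veOCEAN and is down to 50 veOCEAN at the halfway point.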

[2] It’s a bit like how NEAR and Canto L1s pay a % of tx fees to dapp developers.

Follow Ocean Protocol on Twitter, Telegram, or GitHub. And chat directly with the Ocean community on Discord.


Ocean Data Farming Main is Here was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Ocean Protocol Update || 2023

https://blog.oceanprotocol.com/ocean-protocol-update-2023-44ed14510051

What We’re Doing in 2023, and Why

Ocean’s first phase was building the core infrastructure for the open data economy. It’s now entered its next phase: to drive data value-creation loops, by focusing on the users in the last mile.
Contents
1. Abstract
2. Introduction
3. Dapp Developers served by team Eagle-Rays (Stream 1)
4. Data Scientists served by team Thresher (Stream 2)
5. Crypto-Enthusiasts served by team Sailfish (Stream 3)
6. Conclusion

1. Abstract

Ocean was founded to create the building blocks and tools to unleash an open, permissionless and secure data economy. The first phase of Ocean was about building the core infrastructure for the open data economy. Ocean has entered its next phase: to drive data value-creation loops by focusing on the users in the last mile: data dapp developers, data scientists, and data-oriented crypto enthusiasts. To meet these aims, we've re-organized ourselves into three specific streams, one for each user type. This post shares the plans for each of the streams and their respective users.

2. Introduction

2.1. Ocean Foundations

Ocean Protocol was launched in 2017 with a whitepaper and a promise: to create the building blocks and tools to unleash an open, permissionless and secure data economy.

We shipped Ocean V1 to enable data sovereignty (“your keys, your data”). Ocean V2 enabled privacy via Compute-to-Data. We shipped Ocean V3 in 2020 with ERC20 datatokens to leverage other crypto services: where every wallet could be a data custodian, every DEX a data exchange, every DAO a data DAO. V3 met the whitepaper’s goals.

In 2022 we shipped Ocean V4, which refined V3: data NFTs to clarify intellectual property rights and help publishers and marketplaces monetize. We also shipped Data Farming (DF) and veOCEAN staking. We met all our 2022 product goals. Since then, we’ve further refined the Ocean stack towards reliability and usability.

2.2. A New Phase: Drive Value Creation Via User Focus

The five years of Ocean (2017–2022) was the first phase: building the core infrastructure for the open data economy.

During this phase, hundreds of teams tried out new ideas, 20+ strong teams built data dapps, dozens of data scientists used Ocean in data competitions and more, and hundreds of crypto enthusiasts engaged with Ocean Data Farming — amongst the 45,000 holders of OCEAN.

How do these people sustain and thrive in this burgeoning Data Economy?

Our hypothesis: ensure that they can make money! Specifically, create value from data, make money from that value, and loop back and reinvest this value creation into further growth. All the while, Ocean aims to be the orchestration and tooling for deeper and more complex value loops. The focus of the Ocean core team for 2023 and beyond is to realize this hypothesis. Put another way:

Ocean has entered its next phase: to drive data value-creation loops, by focusing on the users in the last mile.

The figure below illustrates.

Figure: The Data Value-Creation Loop. At the top, the user gets data by either buying it, or spending $ to create it. Then, they build an AI model from the data, then make predictions (e.g. “ETH will rise in next 10 minutes”) and choose actions (e.g. “buy ETH”). In executing these actions, they will make $ on average. The $ earned can be looped back into further activities.

In this loop, dapp builders can help their users make money; data scientists can earn directly; and crypto enthusiasts can catalyze the first two if incentivized properly (e.g. to curate valuable data).

DeFi & LLMs. We analyzed dozens of possible verticals with respect to how quickly someone goes through the data value creation loop, and how much impact Ocean can have. The most promising and mature vertical is decentralized finance (DeFi). Already in Q4/2022, the Ocean core team started to bias activities towards DeFi in our data challenges, internal research and funding programs, without constraining the Ocean community or applications elsewhere. With the release of ChatGPT and influx of attention to AI, the vertical of large language models (LLMs) and related AI tech is highly promising. Ocean’s core proposition was to be at the intersection of AI, data and blockchain. The Ocean core team is exploring ways to support the growth of AI and stay ahead of the pack.

Loops, then Scale. Once one or two data-value creation loops have been established on Ocean, where people are sustainably making money, we will help to scale those loops up by extending our technology and integrations so that developers and data engineers can do more and more over time. Our aim is to grow over the long-term, until Ocean is ubiquitous as a tool for orchestration and monetization.

2.3. 2023+ Ocean Core Team Goals & Setup

Ocean core team has set its aims accordingly: help data dapp developers, data scientists, and crypto enthusiasts to drive data value-creation loops.

To meet these aims, we’ve re-organized ourselves into three specific streams: Stream 1 (Eagle Rays) for developers, Stream 2 (Thresher) for data scientists, and Stream 3 (Sailfish) for crypto enthusiasts. The figure below illustrates.

Ocean core team’s aims: help (1) data dapp developers, (2) data scientists, and (3) crypto enthusiasts to drive data value-creation loops. The team is organized around these three aims.

2.4. Key Performance Indicators (KPIs)

Ocean core team has transitioned from building out a product specified by a whitepaper and annual roadmaps, to doing “whatever it takes” to drive data value creation. Accordingly, the measure of success changes from “was X built” to “how well are metrics for the data economy growing?”

Specifically, we track progress via these metrics:

  • The main metric is data consume volume (DCV) in USD on Ocean, i.e. how much money is spent buying & consuming data and AI assets in a given time period.

Other key performance indicators (KPIs) are:

  • # Ocean transactions,
  • # dapps deployed using Ocean,
  • # data scientists using Ocean,
  • # assets published on Ocean, and
  • # OCEAN locked.

We have set up internal measures for these metrics and will make them public when ready.

KPIs for Ocean core team. The main metric is $ data consume volume (DCV). It’s flanked by five supporting metrics.

2.5. Outline

This introduction has outlined Ocean’s first phase — creating the building blocks & tools as a foundation for an open, permissionless, secure data economy. And it introduced Ocean’s next phase: driving data value-creation loops, by focusing on the users in the last-mile.

We are now in a position to share the plans for each of the teams, and their respective users. The rest of this post is organized as follows:

  • Section 3 — Dapp Developers served by Team Eagle-Rays (Stream 1)
  • Section 4 — Data Scientists served by Team Thresher (Stream 2)
  • Section 5 — Crypto-Enthusiasts served by Team Sailfish (Stream 3)

3. Dapp Developers served by Team Eagle-Rays (Stream 1)

[Spotted Eagle Ray. Image by John Norton. License: CC-BY]

3.1. Overview

Ocean core team has created building blocks and tools towards enabling an open, permissionless, secure data economy [1]. Dapp developers in the ecosystem can then build last-mile data dapps that use these building blocks and tools. Ocean makes it possible to quickly build & ship decentralized data marketplaces, token-gated dapps, dapps with secure non-custodial user data, and more.

While the Ocean stack has made great progress, there are still challenges. The developer experience is suboptimal and sometimes leads to projects being “lost” in the fray. The core team has only partial visibility on dapps being built, and often fails to provide appropriate support to the development teams. Finally, it can be difficult for developers to see the full potential of possible use cases when given just the building blocks & tools.

We can improve this. The priority for team Eagle-Rays is to help developers build sustainable dapps on top of Ocean Protocol with minimal friction.

The target outcomes for team Eagle Rays are:

  • A developer can build a sustainable dapp on the Ocean stack within 1 month of interacting with the Ocean Protocol development team. Here "sustainable" means the dapp generates income, so the team can support itself with a viable business model and ultimately thrive.
  • Ocean becomes the meeting point for data dapp builders — a place for sharing development tools & templates, for support, and for funding of new ideas.

Team Eagle-Rays will run these activities towards achieving the goals:

  1. Gather community feedback
  2. Improve dapp developer support
  3. Run a dapp dashboard to highlight projects building on Ocean
  4. Create demos to expose the capabilities of Ocean to developers
  5. Host workshops & hackathons to onboard developers into Ocean
  6. Provide referrals & funding for promising teams

The following subsections have details.

3.2. Gather Community Feedback

Our first goal is to gain a better understanding of the current state of affairs.

In the coming weeks we will have one-on-one conversations with the 20+ teams building data dapps on Ocean. We will understand the challenges they face and how we can serve them better in terms of tools and the technical support.

From this, we’ll compile a list of learnings, hypotheses and follow-up actions.

3.3. Improve Developer Support

Rather than being merely reactive — responding to requests when something is not working — we aim to be proactive in supporting the developers building dapps on top of Ocean Protocol. To assist the teams from the moment they begin their journey, advising on the design, through to final implementation, and feedback after the dapp goes live.

To better allocate internal resources towards high-potential projects, we will define tiers of guidance & technical support. Allocation is based on the level of engagement the dapp team has with Ocean Protocol. It may be as much as a dedicated core team member being allocated to that dapp team.

We will consolidate existing technical support channels and revamp the support procedures. We will refresh the product documentation, to speed up onboarding of developers.

3.4. Run a Dapp Dashboard

To further raise the prominence of dapps on Ocean, we will launch & run a new Dapp Dashboard, linked from oceanprotocol.com.

Every month, the dashboard will highlight one or two teams building on Ocean, to spread the word and help raise awareness for the teams.

3.5. Create Demos & Scaffolding

Having built the building blocks and tools of the Ocean stack, Ocean core developers intimately know the potential of Ocean to unlock data sharing and monetization.

To accelerate broader understanding and education, core developers will build a portfolio of demos and scaffolding (templates), to showcase the capabilities of Ocean, show what is possible and inspire more developers to build their ideas on Ocean. We aim to release one demo per month, showcasing various business scenarios or use cases where Ocean can be used.

3.6. Host Workshops & Hackathons

We will engage more with our developer community by organizing bi-weekly and monthly technical workshops, where we'll take deep dives into selected technical aspects of our platform.

We will run quarterly hackathons, with the target goals:

  • Rapid prototyping: To quickly test new ideas for sustainable data dapps.
  • Community building: To bring people together in the Ocean community and together explore what a new data economy can look like.
  • Innovation & idea generation: To generate new use cases within the new data economy using Ocean
  • Skill development: To help dapp developers learn new skills, via a project that pushes them outside their comfort zone.

3.7. Provide Referrals and Funding

Once a dapp team is building on Ocean, the Ocean core team offers a range of non-technical support to help ease launching a viable dapp. This includes referrals to vetted legal, financial and organizational partners.

It also includes potential funding support, as follows:

  • Ocean Shipyard — funding high quality teams to test out their ideas
  • Ocean Ventures — investing between $25,000-$100,000 into Pre-Seed and Seed rounds for dapp teams
  • Ocean Ecosystem Fund — a $10 million pool of capital pledged by Cypher Capital and Faculty Fund to invest in dapp teams

4. Data Scientists served by team Thresher (Stream 2)

[Thresher Shark. Image by Raven Malta. License: CC-BY-SA]

4.1. Overview

Team Thresher’s aim is to help data scientists make $ from their data and algorithms on Ocean.

Let’s unpack. Thresher’s target users are data scientists. They use Python extensively. They’re quite familiar with key AI/ML tools like numpy, and scikit-learn. They love algorithms. They are Web3-curious, though not necessarily Web3 experts. They loathe devops.

The Ocean value proposition for them is: with Ocean, data scientists can sell compelling data feeds powered by an algorithm of their design, while the algorithm stays private, with zero devops.

Accordingly, Team Thresher has these top-level goals:

  • Make it easy for data scientists to create compelling data feeds and to sell them via Ocean
  • Grow the community of data scientists creating & selling compelling data feeds.

Team Thresher will run these activities towards achieving the goals:

  1. Run predict-ETH & other objective data challenges, which guide data scientists to sell compelling data feeds
  2. Run subjective data challenges, which drive insight, community, and partnerships
  3. Ship open, compelling data feeds, which data scientists can fork and ship as their own
  4. Reduce friction in data scientists’ (Python) flows. Especially, make ocean.py “just work”
  5. Other tools to demonstrate capabilities and drive traction

Each subsection below elaborates on these points.

4.2. Run Predict-ETH & Other Objective Data Challenges

Team Thresher uses Data Challenges to guide data scientists along the path to monetization. They start by entering competitions, yet end up shipping compelling data feeds that make $$. Here's how.

Data challenges are competitions that invite participants to solve data-related problems, with prize money attached. They provide a platform to practice solving data problems, to collaborate, and to make $. They have become popular in recent years, being hosted by many companies and research institutions.

In a typical data challenge, a data scientist learns about the challenge, works on it to devise an algorithm, and submits an entry to the competition.

We take this further: the data scientist also submits a data feed powered by the data scientist’s algorithm, where the data feed is live and for sale on Ocean Market. If this algorithm is compelling enough, then people will buy it, and the data scientist will make $.

Predict-ETH. The prime example is the monthly Predict-ETH challenge. This challenge has competitors submit predictions for the price of ETH 1, 2, …, 12 hours into the future. If the data scientist can create accurate predictions, there’s $ to be made from trading. This is compelling data. In upcoming rounds of Predict-ETH, competitors will also be guided to submit C2D data feeds that generate ETH predictions upon request (powered by Ocean C2D stack). These feeds are directly monetizable. If the predictions are good, others will buy them.

Now, the data scientist is making $ by winning the competition, AND making even more $$ by selling the feeds of their algorithms.

When they’re making $, they’re sticking around to make more $, all using the Ocean stack.

This scales. As data scientists make $ and stick around, they share with their friends, leading to organic growth. So an initial handful of data scientists can grow to tens, then hundreds, then thousands. All creating compelling data feeds; all kickstarting an open, permissionless, secure data economy.

4.3. Run Subjective Data Challenges

The strategy of the previous section is best suited to data challenges with an objective, quantitative metric, such as normalized mean-squared error (NMSE) in Predict-ETH.
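
For reference, here is one common definition of NMSE (definitions vary; this one normalizes MSE by the variance of the target, and is an illustration rather than the exact Predict-ETH judging code):

```python
import numpy as np

def nmse(yhat: np.ndarray, y: np.ndarray) -> float:
    """Normalized mean-squared error: MSE divided by the variance of y.
    A perfect prediction scores 0.0; always predicting the mean of y
    scores 1.0, so values below 1.0 beat the naive baseline."""
    return float(np.mean((yhat - y) ** 2) / np.var(y))
```

This normalization is what makes scores comparable across targets with different scales.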

There are also subjective data challenges — where judges assess qualitative metrics. We are running these too. Here’s why.

Benefits to data scientists:

  • Tools to demonstrate capabilities and drive adoption: these help data scientists promote their work. This can include dashboards, visualizations, and other features that help users gain insights into data.
  • A means for organizations to partner with Ocean core team and share their data. The partner gets the benefits of people analyzing their data, and learning more about the Ocean stack. E.g. Government of Catalonia
  • Providing more broadly accessible competitions. Subjective challenges appeal to a different demographic of data scientists — ones that prefer the “insight” side of data science over stark objective values

Benefits to Ocean core team:

  • Qualitative feedback on Ocean stack
  • Learn more about data scientist tools and workflows
  • Learn more about a new vertical, problem space, or subproblem. Without having to go directly to an objective data challenge.

4.4. Ship Compelling Data Feeds

The best way for competitors to create compelling data feeds is… to fork other compelling data feeds! So, Team Thresher is working to create compelling data feeds that are also open (and therefore forkable). To start, we’re doing this for predictions of ETH. We’ll do this for other objective competitions in the future as well.

Q: if Ocean can make such valuable algorithms, why would it make them open?

A: Ocean aims to kickstart a whole data economy, which means we want many participants shipping compelling data feeds. To catalyze this, we give them a starting point, where they can make $ with little extra effort!

4.5. Reduce Friction in Data Scientist Flows

As described, Thresher’s target users are data scientists, who use Python extensively.

We want to make it as low-friction as possible for them to use Ocean in their Python-based flows, especially the flows around monetizing their algorithms, as discussed above.

A first piece of this: make ocean.py "just work". ocean.py is a Python library on PyPI. It's the main interface for Python users to access Ocean capabilities, such as publishing a file or algorithm, creating datatokens (access tokens), and sharing or selling those. We've discovered that when people use it, they often run into "gotchas" in OSes or in dependencies on Ocean Aquarius / Provider.

This also includes: make ocean.py C2D flow “just work” (C2D = Compute-to-Data). C2D has many moving parts. Can we simplify its UX and make it more reliable?

Furthermore, this includes: making ocean.py more accessible in a broader set of Python workflows. E.g. have it as part of the default Anaconda distribution.

4.6. Other Tools to Demo Capabilities and Drive Traction

We see many opportunities of tools to help potential users better understand Ocean capabilities, try out Ocean, and ultimately drive adoption. Especially in DeFi and LLMs. We keep this open-ended here for now.

5. Crypto-Enthusiasts served by team Sailfish (Stream 3)

[Sailfish. Image by Rodrigo Friscione. License: Public Domain]

5.1. Overview

Last year, the Ocean core team launched Data Farming and veOCEAN. veOCEAN aligns near-term with long-term perspectives in the burgeoning data economy. By Q4, Data Farming was driving up to 1.5M $USD in Data Consume Volume (DCV) per week, with up to 75K $OCEAN in rewards being distributed to data farmers.

Every participant who contributes value to this data economy should benefit from it. This includes work by staking & curating, building dapps that integrate veOCEAN & Data Farming, or otherwise.

Team Sailfish builds on this promising start. Our mission is to help crypto enthusiasts — especially web3-native Ocean participants — to earn in the Ocean ecosystem. These participants do work to reap economic rewards, and the system harnesses this work to further drive Ocean DCV via dapps and data scientists. For example, staking on data assets helps data curation in Ocean Market.

Besides driving DCV, another KPI we focus on is to increase the # participants in Data Farming and their APYs, by specifically targeting (a) OCEAN holders that have not yet engaged with veOCEAN, (b) veOCEAN holders whose APYs can be improved, and (c) publishers who can benefit from boosting.

Towards these goals, Team Sailfish will run these activities:

  1. Launch & Complete DF Main
  2. Refine DF Rewards
  3. Serve DeFi dapp Builders Better
  4. Other KPI-Driving Activities

The following subsections have details.

5.2. Launch & Complete DF Main

DF has several phases: DF Alpha, DF/VE Alpha, DF Beta, and DF Main. DF Main is soon: it starts in DF Round 29 (Mar 26, 2023).

DF Main is the final “production” phase where the largest amounts of OCEAN are dispensed. It vests OCEAN with a 4-year half-life (like Bitcoin) into rewards for Data Farming.
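For intuition on what a 4-year half-life implies, here is a rough sketch of such an exponential emission schedule. The starting figure and weekly granularity are illustrative assumptions, not the actual DF Main contract parameters:

```python
HALF_LIFE_WEEKS = 4 * 52  # 4-year half-life, expressed in weekly DF rounds

def weekly_emission(initial_weekly: float, week: int) -> float:
    """OCEAN emitted in a given week, under exponential decay with a 4-year half-life."""
    return initial_weekly * 0.5 ** (week / HALF_LIFE_WEEKS)

start = 150_000.0  # illustrative OCEAN/week, not the real DF budget
print(weekly_emission(start, 0))                # 150000.0
print(weekly_emission(start, HALF_LIFE_WEEKS))  # 75000.0 -- half the rate after 4 years
```

Like Bitcoin's halvings, the area under this curve is finite, so the total OCEAN ever vested is bounded.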

Given the large amounts of OCEAN, we follow the time-tested approach of ratcheting up value-at-risk over time. Specifically, we’ll first deploy a “canary” vesting contract that will hold just a small amount of OCEAN, and manually top up weekly DF rewards. Once the system has stabilized and contracts are verified (via bug bounty & security audit) then we will deploy the remaining amount.

In completing DF Main, OCEAN vesting & related will be fully automated and on-chain. This will serve the Ocean ecosystem well in the decades that follow, with more transparency, stability, and composability.

5.3. Refine DF Rewards

Improvements to the Reward Function (RF) create new pull-mechanisms that help drive DCV and other metrics. An example is the introduction of Data Farming and how much Data Consume Volume it drove.

Every week DF emits OCEAN rewards, according to a pre-set schedule. We want to maximize the bang-for-the-buck of these emissions in driving traction.

Accordingly, we spend effort to refine DF rewards. Recent changes include:

  • We recently introduced publisher rewards — giving publishers 2x effective stake — to catalyze more data being published into Ocean.
  • For DF29, wash consume will become unprofitable. Until then, it’s profitable because users receive more rewards than the 0.1% fee they pay.
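The wash-consume economics reduce to a simple inequality: self-consuming your own asset is profitable only while the DF rewards earned per dollar of consume volume exceed the 0.1% fee paid. A sketch (the reward rates below are illustrative assumptions, not actual DF parameters):

```python
FEE_RATE = 0.001  # the 0.1% fee paid on consume volume

def wash_consume_profit(volume_usd: float, reward_rate: float) -> float:
    """Profit from 'wash consuming' your own asset: DF rewards earned on the
    volume, minus the fee paid. reward_rate = rewards per $ of consume volume."""
    return volume_usd * (reward_rate - FEE_RATE)

print(wash_consume_profit(10_000, reward_rate=0.002))   # positive: 0.2% rewards > 0.1% fee
print(wash_consume_profit(10_000, reward_rate=0.0005))  # negative: rewards below the fee
```

Tuning rewards so that the effective reward rate falls below the fee is what makes wash consume unprofitable.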

To drive DCV further, we envision more refinements of DF Rewards. Besides tuning the existing function, we can split the weekly active DF rewards budget into sub-streams. New sub-streams could include:

  • Dapp Data Farming. If a dapp running Ocean uses x$ of gas, pay the dapp developer 25–100% * x (denominated in OCEAN). This could be an excellent way for dapp developers to monetize [2]
  • Competition Data Farming. At some point, some of the Stream 2 data competitions can get streamlined & automated enough to put into weekly DF ops.
  • Other publisher schemes. E.g. high-touch schemes to onboard publishers and ensure they get paid. Maybe less automated, but could be very helpful in the next 6–18 months.
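The dapp sub-stream rebate is straightforward to sketch. The figures are purely illustrative; only the 25–100% range comes from the bullet above:

```python
def dapp_rebate_ocean(gas_spend_usd: float, rebate_frac: float, ocean_price_usd: float) -> float:
    """OCEAN owed to a dapp developer whose Ocean-powered dapp consumed
    gas_spend_usd worth of gas, at a rebate fraction in [0.25, 1.0]."""
    if not 0.25 <= rebate_frac <= 1.0:
        raise ValueError("rebate must be between 25% and 100%")
    return gas_spend_usd * rebate_frac / ocean_price_usd

# e.g. $400 of gas, 50% rebate, OCEAN at $0.50 -> 400.0 OCEAN
print(dapp_rebate_ocean(400, 0.5, 0.50))
```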

5.4. Serve DeFi Dapp Builders Better

We will know that we have succeeded when we can observe many products, features, and case studies showing organic demand and validating end-to-end value flows on top of the protocol.

In 2022, it was important for us to build the systems that get us from 0 → 1 with veOCEAN and DF. So, our focus was largely on core software.

For 2023, our focus is core software and customer success — success of (a) OCEAN holders and of (b) DeFi-oriented dapp builders. Previous sections focused on (a) OCEAN holders.

We want to serve (b) DeFi-oriented dapp builders better. Specifically, builders that are building out the DeFi side of OCEAN, veOCEAN, and Data Farming.

A prime example is H2O’s introduction of psdnOCEAN, for liquid staking of OCEAN. psdnOCEAN provides improved UX, higher APY and more liquidity than default veOCEAN. We view this as a success story.

We ask: what other DeFi-oriented dapp builders can we engage with, and serve better? Can OCEAN or veOCEAN integrate with DeFi loan protocols like Aave? Can we guide teams to build fractionalized Data NFTs, revenue-backed loans, or other?

5.5. Other KPI-Driving Activities

We aim to improve these KPIs:

  • # OCEAN holders participating in veOCEAN
  • APY for veOCEAN and Data Farming participants
  • # Publishers with multiple quality data sets
  • Publisher volume and APY

We have several hypotheses of how to drive these KPIs. We don’t know which will work. We will test ideas, learn, and iterate. Therefore, this section is open-ended. Let’s see where it takes us!

6. Conclusion

Ocean was founded to create the building blocks and tools to unleash an open, permissionless and secure data economy. The first phase of Ocean was about building the core infrastructure for the open data economy. Ocean has entered its next phase: to drive data value-creation loops, by focusing on the users in the last mile: data dapp developers, data scientists, and data-oriented crypto enthusiasts.

To meet these aims, we’ve re-organized ourselves into three specific streams: Stream 1 (Eagle Rays) for developers, Stream 2 (Thresher) for data scientists, and Stream 3 (Sailfish) for crypto enthusiasts.

This post shared the plans for each of the streams, and their respective users.

7. Notes

[1] Besides building block primitives and tools, the Ocean core team may also build “baseline apps” like Ocean Market (decentralized data market)

[2] It’s a bit like how NEAR and Canto L1s pay a % of tx fees to dapp developers.

Follow Ocean Protocol on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord. Or, track Ocean progress directly on GitHub.


Ocean Protocol Update || 2023 was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Getting Started with ocean.py, Part 2: Set up Locally

https://blog.oceanprotocol.com/getting-started-with-ocean-py-part-2-set-up-locally-e232dc8d8182

Introduction

ocean.py is a Python library to privately & securely publish, exchange, and consume data, using Ocean Protocol.

Part 1 of this series introduced ocean.py, and described how to install it.

In this post, we pick up from part 1, to do setup for local testing. A later post will describe how to do setup for remote networks.

We’ll closely follow the ocean.py setup-local.md README.

Note 1: if you have issues with any steps in this README, please share them in Ocean’s #dev-support Discord, and we will work with you to resolve them.

Note 2: this blog post is up-to-date as of Mar 2023, but if 2+ months have passed, we recommend using the READMEs directly for up-to-date instructions.

Let’s get going!

1. Download barge and run services

Ocean barge runs ganache (local blockchain), Provider (data service), and Aquarius (metadata cache).

Barge helps you quickly become familiar with Ocean, because the local blockchain has low latency and no transaction fees. Accordingly, many READMEs use it. However, if you plan to only use Ocean with remote services, you don’t need barge.

In a new console:

# Grab repo
git clone https://github.com/oceanprotocol/barge
cd barge

# Clean up old containers (to be sure)
docker system prune -a --volumes

# Run barge: start Ganache, Provider, Aquarius; deploy contracts; update ~/.ocean
./start_ocean.sh

Now that we have barge running, we can mostly ignore its console while it runs.

2. Brownie local network configuration

(You don’t need to do anything in this step, it’s just useful to understand.)

Brownie’s network configuration file is at ~/.brownie/network-config.yaml.

When running locally, Brownie will use the chain listed under development, having id development. This refers to Ganache, which is running in Barge.

3. Set envvars

From here on, go to a console different than Barge. (E.g. the console where you installed Ocean, or a new one.)

First, ensure that you’re in the working directory, with venv activated:

cd my_project
source venv/bin/activate

Then, set the private keys used in the READMEs. If you’re a Linux or MacOS user, you’ll use “export”; if Windows, you’ll use “set”. Let’s make it explicit. In the same console:

Linux & MacOS users:

# keys for alice and bob in readmes
export TEST_PRIVATE_KEY1=0x8467415bb2ba7c91084d932276214b11a3dd9bdb2930fefa194b666dd8020b99
export TEST_PRIVATE_KEY2=0x1d751ded5a32226054cd2e71261039b65afb9ee1c746d055dd699b1150a5befc

# key for minting fake OCEAN
export FACTORY_DEPLOYER_PRIVATE_KEY=0xc594c6e5def4bab63ac29eed19a134c130388f74f019bc74b8f4389df2837a58

Windows users:

# keys for alice and bob in readmes
set TEST_PRIVATE_KEY1=0x8467415bb2ba7c91084d932276214b11a3dd9bdb2930fefa194b666dd8020b99
set TEST_PRIVATE_KEY2=0x1d751ded5a32226054cd2e71261039b65afb9ee1c746d055dd699b1150a5befc

# key for minting fake OCEAN
set FACTORY_DEPLOYER_PRIVATE_KEY=0xc594c6e5def4bab63ac29eed19a134c130388f74f019bc74b8f4389df2837a58

4. Setup in Python

In the same console, run Python console:

python

In the Python console:

# Create Ocean instance
from ocean_lib.web3_internal.utils import connect_to_network
connect_to_network("development")

from ocean_lib.example_config import get_config_dict
config = get_config_dict("development")

from ocean_lib.ocean.ocean import Ocean
ocean = Ocean(config)

# Create OCEAN object. Barge auto-created the OCEAN token, and the ocean instance knows its address
OCEAN = ocean.OCEAN_token

# Mint fake OCEAN to Alice & Bob
from ocean_lib.ocean.mint_fake_ocean import mint_fake_OCEAN
mint_fake_OCEAN(config)

# Create Alice's wallet
import os
from brownie.network import accounts
accounts.clear()

alice_private_key = os.getenv("TEST_PRIVATE_KEY1")
alice = accounts.add(alice_private_key)
assert alice.balance() > 0, "Alice needs ETH"
assert OCEAN.balanceOf(alice) > 0, "Alice needs OCEAN"

# Create Bob's wallet. While some flows just use Alice wallet, it's simpler to do all here.
bob_private_key = os.getenv('TEST_PRIVATE_KEY2')
bob = accounts.add(bob_private_key)
assert bob.balance() > 0, "Bob needs ETH"
assert OCEAN.balanceOf(bob) > 0, "Bob needs OCEAN"

# Compact wei <> eth conversion
from ocean_lib.ocean.util import to_wei, from_wei

Because you’ve set up for local, you’ll be doing all these steps on the local network.
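The to_wei / from_wei helpers convert between human-readable token amounts and the 18-decimal integer “wei” units used on-chain. Conceptually, they behave like this (a sketch of the idea, not ocean.py’s actual implementation):

```python
from decimal import Decimal

DECIMALS = 10**18  # 1 token = 10**18 wei, like ETH

def to_wei(amount) -> int:
    """Human-readable amount -> integer wei. Decimal avoids float rounding surprises."""
    return int(Decimal(str(amount)) * DECIMALS)

def from_wei(amount_wei: int) -> float:
    """Integer wei -> human-readable amount."""
    return float(Decimal(amount_wei) / DECIMALS)

print(to_wei(1))                # 1000000000000000000
print(from_wei(to_wei("2.5")))  # 2.5
```

On-chain balances (like `alice.balance()` and `OCEAN.balanceOf(alice)` above) are returned in wei, which is why these helpers come in handy.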

Conclusion / Next step

You’ve now set up everything you need for local testing, congrats!

The next step — the fun one — is to walk through the main flow. In it, you’ll publish a data asset, post for free / for sale, dispense it / buy it, and consume it.

We’ll cover that in the next post of this series. (Here’s the ocean.py README version of it.)

Have questions? Here’s the Ocean Protocol #dev-support channel on Discord. For updates from the Ocean team, follow us on Twitter🙂


Getting Started with ocean.py, Part 2: Set up Locally was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Getting Started with ocean.py, Part 1: Introduction & Installation

https://blog.oceanprotocol.com/getting-started-with-ocean-py-part-1-introduction-installation-530295c3d3b4?source=rss----9d6d3078be3d---4

Introduction

Are you a data scientist that’s curious about Web3 / blockchain, and don’t know where to begin? Have you invented some cool new AI algorithm and want to monetize it? Are you creating data to train an LLM and want to specify the data’s licensing terms? Or perhaps you want to sell data without losing privacy or control?

Good news, this is what ocean.py is all about! It’s a Python library to privately & securely publish, exchange, and consume data. It leverages Ocean Protocol to get the benefits above.

This blog post is the first of a series. The series walks you through how to install, set up, and then use ocean.py for the benefits above (and beyond!).

This post describes ocean.py capabilities, gives an outline of the quickstart steps, then goes right into step 1 of the quickstart — installing.

Note: if you prefer, here’s this post as two videos: (a) Intro (b) Installation.

ocean.py Capabilities

ocean.py lets you do the following things.

  • Publish data services: downloadable files or compute-to-data. Create an ERC721 data NFT for each service, and an ERC20 datatoken for access (1.0 datatokens to access).
  • Sell datatokens for a fixed price. Sell data NFTs.
  • Transfer data NFTs & datatokens to another owner, and all other ERC721 & ERC20 actions using web3.py or Brownie.

We think this is pretty cool! Alas, it’s somewhat in crypto-speak. If you don’t know some of the words or acronyms, don’t fret! As you start playing with ocean.py, it will become more clear.
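To ground the jargon a bit, here is a toy model of the data NFT / datatoken relationship. It mimics the concepts only; these classes are hypothetical and are not ocean.py’s real API:

```python
from dataclasses import dataclass, field

@dataclass
class Datatoken:
    """ERC20-style access token: holding >= 1.0 grants access to one service."""
    symbol: str
    balances: dict = field(default_factory=dict)

    def mint(self, to: str, amount: float) -> None:
        self.balances[to] = self.balances.get(to, 0.0) + amount

    def can_access(self, user: str) -> bool:
        return self.balances.get(user, 0.0) >= 1.0

@dataclass
class DataNFT:
    """ERC721-style 'base IP' for a data asset; owns one datatoken per service."""
    name: str
    owner: str
    datatokens: list = field(default_factory=list)

nft = DataNFT("my-dataset", owner="alice")   # publish: mint the data NFT
dt = Datatoken("DT1")                        # one datatoken per service
nft.datatokens.append(dt)
dt.mint("bob", 1.0)                          # bob acquires an access token
print(dt.can_access("bob"))  # True
```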

Video: Outline

Here’s the above content as a video by @graceful-coder 🙂

https://medium.com/media/08fa52a9c63eb1e8f79ddaafa3e6b45a/href

ocean.py Quickstart

To get going quickly with ocean.py, its main README outlines several steps. Steps 1–3 are local, then steps 4–6 are on remote flows. Don’t follow these links just yet! We’ll walk you through them in this & future articles.

  1. Install Ocean
  2. Set up locally
  3. Walk through main flow (local setup): publish asset, post for free / for sale, dispense it / buy it, and consume it.
  4. Now for remote flows! First, set up remotely
  5. Walk through main flow (remote setup). Like step 3, but on wholly remote network
  6. Walk through C2D flow — tokenize & monetize AI algorithm via Compute-to-Data

After these quickstart steps, the main README points to several other use cases, such as predict-eth, Data Farming, on-chain key-value stores (public or private), and other types of data assets (REST API, GraphQL, on-chain).

Installation

Now, let’s cover step 1 from above — installation of ocean.py. Future blog posts will cover the other steps (2–6) and other use cases.

Happy Path

ocean.py is a Python library on pypi as ocean-lib. So as you might expect, you can install it with:

pip install ocean-lib

BUT! It’s not always that simple. For starters, you’ll probably want to install it in a virtual environment. And there can be gotchas in installing some libraries that ocean.py uses.

Because of this, ocean.py has a dedicated README for installation, called install.md. It helps you handle these details. Let’s go through it!

(Note: this blog post is up-to-date as of Mar 2023, but if 2+ months have passed, we recommend using the READMEs like install.md directly for up-to-date instructions. )

Prerequisites

ocean.py is designed to work with Linux, MacOS, or Windows. If you have one of those, you’re in luck. If you’re still on DOS… we don’t know what to say!

ocean.py local setup uses Docker containers to hold local blockchain nodes (Ganache), and Ocean middleware (Aquarius metadata cache, Provider to help consume data assets). Therefore you need Docker and Docker Compose set up and running; the install.md README lists the exact requirements.

ocean.py works on modern versions of Python: from version 3.8.5 to Python 3.10.4. And also Python 3.11, though some manual alterations are needed for it (details below).

Install ocean.py library

ocean.py requires some basic system dependencies which are standard to development images. If you encounter trouble during installation, please make sure you have autoconf, pkg-config and build-essential or their equivalents installed.

In a new console:

# Create your working directory
mkdir my_project
cd my_project

# Initialize virtual environment and activate it. Install artifacts.
# Make sure your Python version inside the venv is >=3.8.
# Anaconda is not fully supported for now, please use venv
python3 -m venv venv
source venv/bin/activate

# Avoid errors for the step that follows
pip install wheel

# Install Ocean library.
pip install ocean-lib

OK! We’ve done the not-so-simple version of pip install ocean-lib, handling the virtual environment and the gotchas that wheel can solve.

Alas, there may be other issues too! Here are some, with workarounds.

Potential issues & workarounds

Issue: M1 * coincurve or cryptography

  • If you have an Apple M1 processor, coincurve and cryptography installation may fail due to missing packages, which come pre-packaged in other operating systems.
  • Workaround: ensure you have autoconf, automake and libtool installed, e.g. using Homebrew or MacPorts.

Issue: MacOS “Unsupported Architecture”

  • If you run MacOS, you may encounter an “Unsupported Architecture” issue.
  • Workaround: install including ARCHFLAGS: ARCHFLAGS="-arch x86_64" pip install ocean-lib. Details.

To install ocean-lib using Python 3.11, run pip install vyper==0.3.7 --ignore-requires-python and sudo apt-get install python3.11-dev before installing ocean-lib. Since the parsimonious dependency does not support Python 3.11, you need to edit parsimonious/expressions.py to import getfullargspec as getargspec instead of the regular import. These are temporary fixes until all dependencies fully support Python 3.11. We do not directly use Vyper in ocean-lib.

ocean.py uses Brownie

To be more precise, Ocean ❤️ Brownie!

When you installed Ocean (ocean-lib pypi package) above, it included installation of Brownie (eth-brownie package).

ocean.py uses Brownie to connect with deployed smart contracts.

Thanks to Brownie, ocean.py treats each Ocean smart contract as a Python class, and each deployed smart contract as a Python object. We love this feature, because it means Python programmers can treat Solidity code as Python code! 🤯

Video: Installation

Here’s the above content as a video by @graceful-coder 🙂

https://medium.com/media/79ec17d3eaccc51eee9c0cb9f4ad067d/href

Conclusion / Next Step

With the steps above, you’ve now installed Ocean, great!

The next post in this series walks you through how to set up locally. (Here’s the ocean.py README version of it.)

Have questions? Here’s the Ocean Protocol #dev-support channel on Discord. For updates from the Ocean team, follow us on Twitter🙂


Getting Started with ocean.py, Part 1: Introduction & Installation was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.
