CharmVerse X Ceramic: Empowering User Data Ownership in the Blockchain Era

https://blog.ceramic.network/charmverse-ceramic-case-study/

CharmVerse, a pioneering web3 community engagement and onboarding platform, recently integrated ComposeDB on Ceramic to store user attestations for grants and rewards. CharmVerse’s decision to build on Ceramic was driven by the need to store credentials in a user-owned, decentralized manner, without relying on traditional databases.

A who’s who of well-known web3 projects leverages CharmVerse to help manage their community and grants programs. Optimism, Game7, Mantle, Safe, Green Pill, Purple DAO, Orange DAO, Taiko (and the list goes on) have all experienced the need for a unique, web3-centric platform to interact with and empower their ecosystems.

What Objectives Does the Integration Address?

The work of vetting developer teams and distributing grants demands a significant investment of time and focus to ensure responsible treasury deployment. This need-driven use case is a wonderful fit for Ceramic’s capabilities.

CharmVerse identified an opportunity to enhance grants/community managers’ capabilities by implementing a credentialing and rewards system that supports user-sovereign data. This system allows grants managers to better understand their applicants, scale the number of teams they can work with, and issue attestations representing skills and participation in grants and other community programs, creating a verifiable record of participation. However, this solution came with technical challenges: maintaining user data privacy and ownership while ensuring decentralization, since this data represents significant insight into the historical activity and capabilities of individuals and teams.

Why did CharmVerse Choose Ceramic?

CharmVerse considered various options but ultimately chose Ceramic due to its unique capability to support decentralized credentials and store attestations in a way that aligned with CharmVerse’s vision. Alex Poon, CEO & co-founder of CharmVerse, shared:

“Ceramic’s unique approach to data decentralization has been a game changer for us, allowing us to truly empower our users while respecting their privacy, allowing users the choice to keep their data private or publish it on-chain. This integration aligns perfectly with CharmVerse’s success metrics, centering on community empowerment and data sovereignty.”

How did CharmVerse Integrate Ceramic?

CharmVerse’s integration utilizes Ceramic’s ability to store user attestations and leverages Ceramic’s work with the Ethereum Attestation Service (EAS) as the underlying model for supporting decentralized credentials. The integration was not only a technical milestone for CharmVerse but also achieved the strategic goal of appealing to an audience concerned with data privacy and ownership.

More specifically, CharmVerse issues off-chain signed attestations in recognition of important grant program milestones (designed to award these credentials both when users create proposals, and when their grants are accepted). Given Ceramic’s open-access design, we expect to see other teams utilize these credentials issued by CharmVerse as a strong indication of applicant reputation, track record, and ability to deliver.

How to See CharmVerse in Action

This collaboration illustrates the power of innovative solutions in advancing blockchain usability, value, and adoption while maintaining the values of the early cypherpunk vision of decentralization. If you would like to check out this integration and use the tool to manage your own community programs, visit app.charmverse.io and follow the CharmVerse X account for more updates!

WalletConnect Tutorial: Create User Sessions with Web3Modal

https://blog.ceramic.network/walletconnect-tutorial/

WalletConnect offers Web3 developers powerful tools to make building secure, interactive, and delightful decentralized applications easier. This tooling incorporates best-in-class UX and UI with a modular approach to a suite of SDKs and APIs. For many teams looking to accelerate their development cadence without sacrificing security or quality, WalletConnect’s various SDKs are an obvious choice.

One of our favorites is Web3Modal – a toolset that provides an intuitive interface for dApps to authenticate users and request actions such as signing transactions. Web3Modal supports multiple browser wallets (such as MetaMask and Trust Wallet) and offers thorough instructions in its documentation to help developers get up and running across multiple frameworks (React, Next, Vue, etc.). For this tutorial, we will show how to use WalletConnect’s Web3Modal for user authentication and the creation of user sessions.

Ready? Awesome! Let’s get started.

What Will We Build?

For this tutorial, we will build an application to track event attendance. The use case here is somewhat basic – imagine a conference that wants to keep track of which participants went to which event. They might allow participants to scan a QR code that takes them to this application where they can sign in (with their wallet), optionally opt into sharing their location, and generate a badge showing that they attended.

Here’s a simple visual of the user flow:

[Image: user flow diagram]

Based on the summary above, it might be obvious where Web3Modal fits in. That’s right – we will be using this SDK to authenticate users and keep track of who attended what event based on their wallet address.

We’ve made up two imaginary events to align with this use case:

  1. Encryption Event
  2. Wallet Event

Below is a sneak peek at our app’s UI:

[Image: preview of the app’s UI]

What’s Included in Our Technical Stack?

To power this simple application, we will need a few things:

  1. A frontend framework that runs in the attendee’s browser and a backend to handle any internal API calls we’ll need – we will use NextJS
  2. Wallet tooling so we don’t have to build authentication logic from scratch – Web3Modal
  3. React hooks that work with our browser wallet so we don’t have to build these either – we’ll use Wagmi
  4. Decentralized data storage – we’ll use ComposeDB (graph database built on Ceramic)

Why ComposeDB?

If we’re dealing with potentially thousands (or more) of attendees to these imaginary events (as is often the case with large conferences), storing these records on-chain would be both costly and inefficient. Each record would incur gas fees, and querying the blockchain across tens of thousands of records would be arduous.

Nonetheless, we want our application to give data control to the users who attend the events. And, in our imaginary use case, other conferences must have access to this data (not just our application) so they can determine who should receive admission priority. We will therefore require some sort of decentralized data network.

In Ceramic (which is what ComposeDB is built on), user data is organized into verifiable event streams that are controlled exclusively by the user account that created each stream. Since Ceramic is a permissionless open data network, any application can easily join and access preexisting user data (which meets one of the requirements listed above).

Applications that build on Ceramic/ComposeDB authenticate users (using sign-in with Ethereum), creating tightly-scoped permission for the application to write data to the network on the user’s behalf. This is important for us because our application’s server will need to cryptographically sign the badge (to prove the badge was indeed generated through our application) before saving the output in Ceramic on the user’s behalf.

Finally, ComposeDB adds a graph database interface on top of Ceramic, making it easy to query, filter, order, and more (using GraphQL) across high document volumes – an ideal fit for any teams who want to consume these badges and perform computation over them in an efficient manner.

We will go into more detail throughout this tutorial.

Getting Started

We have set up a special repository to guide you through this tutorial – keep in mind that you will need to add to it using the steps below for it to work.

Start by cloning the demo application repository and install your dependencies:

git clone https://github.com/ceramicstudio/walletconnect-tutorial
cd walletconnect-tutorial
npm install

Go ahead and open the directory in your code editor of choice. If you take a look at your package.json file, you’ll see the @web3modal/wagmi and wagmi packages mentioned above, as well as several @ceramicnetwork and @composedb packages to meet our storage needs.

Obtain a WalletConnect Project ID

While your dependencies are downloading, you can create a WalletConnect project ID (which we’ll need to configure our Web3Modal – more information in their docs). You can do so for free by visiting the WalletConnect Cloud site, creating a new project (with the “App” type selected), and giving it a name of your choosing:

[Image: creating a new project in WalletConnect Cloud]

After you click “Create” you will be directed to the settings page for the project you just set up. Go ahead and copy the alphanumeric value you see next to “Project ID.”

[Image: project settings page showing the Project ID]

Back in your text editor, navigate to your /src/pages/_app.tsx file and enter the ID you just copied into the blank field next to the projectId constant. Notice how we use this ID and a mainnet chain setting when defining our wagmiConfig (later used to create our Web3Modal). Just as the Web3Modal docs instructed, we are setting up these functions outside our React components, and wrapping all child components with our WagmiConfig wrapper:

// imports shown for completeness – exact paths may differ slightly in the repo
import { type AppProps } from "next/app";
import { WagmiConfig } from "wagmi";
import { mainnet } from "wagmi/chains";
import { createWeb3Modal, defaultWagmiConfig } from "@web3modal/wagmi/react";
import { ComposeDB } from "../fragments";

const projectId = ''
const chains = [mainnet]
const wagmiConfig = defaultWagmiConfig({ chains, projectId })
createWeb3Modal({ wagmiConfig, projectId, chains })

const MyApp = ({ Component, pageProps }: AppProps) => {
  return (
    <WagmiConfig config={wagmiConfig}>
      <ComposeDB>
        <Component {...pageProps} />
      </ComposeDB>
    </WagmiConfig>
  );
}
export default MyApp

We can now make our Web3Modal button accessible to child components of our application to allow our users to sign in. If you take a look at /src/components/nav.tsx, you’ll see that we placed the Web3Modal button component directly into our navigation to allow users to sign in/out on any page of our application (at the moment our application only has one page).

Notice how we make use of the size and balance properties – these are two of several settings developers can use to further customize the modal’s appearance. These two in particular are fairly simple to understand – one alters the size of the button, while the other hides the user’s balance when the user is authenticated.
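As a rough illustration, a trimmed sketch of how that button might sit in the nav component is shown below – the <w3m-button> element is registered globally by createWeb3Modal, and the surrounding markup is simplified here (the repo’s actual nav component contains more):

// Simplified sketch of /src/components/nav.tsx (illustrative only)
export default function Nav() {
  return (
    <nav>
      {/* "size" alters the button size; balance="hide" hides the connected wallet's balance */}
      <w3m-button size="sm" balance="hide" />
    </nav>
  );
}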

Finally, you probably noticed in your /src/pages/_app.tsx file that we’re also utilizing a context wrapper. This is what we will explain next.

Create a ComposeDB Configuration

Now that we’ve created our Wagmi configuration, we will need to set up our ComposeDB data storage. There are several steps involved (all of which have been taken care of for you). These include:

  1. Designing the data models our application will need
  2. Creating a local node/server configuration for this demo (a production application would use a hosted node instead)
  3. Deploying our data models onto our node
  4. Defining the logic our application will use to read from + write to our ComposeDB node

Data Models

If you take a look at your /composites folder, you’ll see an /attendance.graphql file where we’ve already defined the models our application will use. In ComposeDB, data models are GraphQL schemas that define the requirements for a single piece of data (a social post, for example), in addition to its relations to other models and accounts. Since Ceramic is an open data network, developers can build on preexisting data models (you can explore tools like S3 to observe existing schemas) or define brand new ones for their app.

In our case, our application will leverage a general event interface that our two event types will implement:

interface GeneralAttendance
  @createModel(description: "An interface to query general attendance") {
  controller: DID! @documentAccount
  recipient: String! @string(minLength: 42, maxLength: 42)
  latitude: Float
  longitude: Float
  timestamp: DateTime!
  jwt: String! @string(maxLength: 100000)
}
type EncryptionEvent implements GeneralAttendance
  @createModel(accountRelation: SINGLE, description: "An encryption event attendance") {
  controller: DID! @documentAccount
  recipient: String! @string(minLength: 42, maxLength: 42)
  latitude: Float
  longitude: Float
  timestamp: DateTime!
  jwt: String! @string(maxLength: 100000)
}
type WalletEvent implements GeneralAttendance
  @createModel(accountRelation: SINGLE, description: "A wallet event attendance") {
  controller: DID! @documentAccount
  recipient: String! @string(minLength: 42, maxLength: 42)
  latitude: Float
  longitude: Float
  timestamp: DateTime!
  jwt: String! @string(maxLength: 100000)
}

Notice how we’ve set the accountRelation field for both types to “SINGLE” – this means a user can only ever have one model instance of that type, creating a 1:1 account relationship. This contrasts with the “LIST” accountRelation, which indicates a 1:many relationship.

You’ll also notice that our latitude and longitude fields do not use a ! next to their scalar definition – what this means is that they are optional, so a model instance can be created with or without these fields defined.

Finally, we will use our jwt field to record the signed badge payload our server will create for the user. Since the user will ultimately be in control of their data, a potentially deceptive user could try to change the values of their model instance outside the confines of our application. Seeing as our architecture requires a way for both our application and other conferences to read and verify this data, the jwt field creates tamper-evident proof of the values by tying the cryptographic signature of our application’s DID together with the data.

Create a Local Server Configuration

Seeing as this is just a demo application and we don’t have a cloud-hosted node endpoint to access, we will define a server configuration to run locally on our computer. While there are multiple server settings an application can leverage, the key items to know for this demo are the following:

  • Our app will use inmemory as its network setting, whereas a production application would use mainnet
  • Our server will use sqlite as its SQL index, whereas a production application would use PostgreSQL
  • Our IPFS will run in bundled mode (ideal for early prototyping), whereas a production application would run in remote mode

Finally, each Ceramic node is configured with an admin DID used to authenticate with the node and perform tasks like deploying models. This is different from the DIDs end users will use when authenticating themselves using their wallet and writing data to the network.

Fortunately, we’ve taken care of this for you by creating a command. Simply run the following in your terminal once your dependencies are installed:

npm run generate

If you take a look at your admin_seed.txt file you will see the admin seed your Ceramic node will use. Your composedb.config.json file is where you’ll find the server configuration you just created.
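For reference, the relevant pieces of that generated configuration will look roughly like the fragment below (paths and exact values will differ on your machine, and most other fields are omitted):

{
  "network": { "name": "inmemory" },
  "indexing": { "db": "sqlite:///<path-to-project>/ceramic.sqlite" },
  "ipfs": { "mode": "bundled" }
}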

Deploying the Models onto Our Node

Seeing as we’re not using a preexisting node endpoint that’s already set up to index the data models we care about, we’ll need a way to deploy our definitions onto our node. If you look at /scripts/composites.mjs you’ll find a writeComposite method we’ve created for you that reads from our GraphQL file, creates an encoded runtime definition and deploys the composite onto our local node running on port 7007.

The important thing to take note of here is how the writeEncodedCompositeRuntime method generates a definition in our definition.js file. We will explain in the next step how this is used by our client-side library to allow our application to interact with these data models and our Ceramic node.
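To make that flow concrete, here is a condensed sketch of what such a deployment script typically looks like using ComposeDB’s devtools helpers. The repo’s actual /scripts/composites.mjs does more (error handling, additional output paths), and the file paths below are illustrative:

import { readFileSync } from "fs";
import { CeramicClient } from "@ceramicnetwork/http-client";
import {
  createComposite,
  writeEncodedComposite,
  writeEncodedCompositeRuntime,
} from "@composedb/devtools-node";
import { DID } from "dids";
import { Ed25519Provider } from "key-did-provider-ed25519";
import { getResolver } from "key-did-resolver";
import { fromString } from "uint8arrays/from-string";

const ceramic = new CeramicClient("http://localhost:7007");

export const writeComposite = async () => {
  // Authenticate the node's admin DID from the generated seed file
  const seed = fromString(readFileSync("admin_seed.txt").toString().trim(), "base16");
  const did = new DID({ provider: new Ed25519Provider(seed), resolver: getResolver() });
  await did.authenticate();
  ceramic.did = did;

  // Create a composite from our GraphQL definitions
  const composite = await createComposite(ceramic, "./composites/attendance.graphql");
  await writeEncodedComposite(composite, "./src/__generated__/definition.json");

  // Generate the runtime definition our client-side ComposeClient will import
  await writeEncodedCompositeRuntime(
    ceramic,
    "./src/__generated__/definition.json",
    "./src/__generated__/definition.js",
  );

  // Tell our local node to start indexing the deployed models
  await composite.startIndexingOn(ceramic);
};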

Don’t take any action yet – we will explain how to use this script in the coming steps.

Integrating ComposeDB with Our Application

Finally, as mentioned above, we will need a way for our application to read from and write to our ComposeDB node. We will also need a way to combine our Web3Modal authentication logic with the need to authenticate users onto our node.

If you take a look at /src/fragments/index.tsx you’ll find a ComposeDB component that allows us to utilize React’s createContext hook and create a wrapper of our own. Since we know Web3Modal will make use of our wallet client, we can leverage the wallet client to request a Ceramic user session authentication from our user.

Observe the following:

const CERAMIC_URL = process.env.URL ?? "http://localhost:7007";
/**
 * Configure ceramic Client & create context.
 */
const ceramic = new CeramicClient(CERAMIC_URL);
const compose = new ComposeClient({
  ceramic,
  definition: definition as RuntimeCompositeDefinition,
});
let isAuthenticated = false;
const Context = createContext({ compose, isAuthenticated });
export const ComposeDB = ({ children }: ComposeDBProps) => {
  function StartAuth() {
    const { data: walletClient } = useWalletClient();
    const [isAuth, setAuth] = useState(false);
    useEffect(() => {
      async function authenticate(
        walletClient: GetWalletClientResult | undefined,
      ) {
        if (walletClient) {
          const accountId = await getAccountId(
            walletClient,
            walletClient.account.address,
          );
          const authMethod = await EthereumWebAuth.getAuthMethod(
            walletClient,
            accountId,
          );
          const session = await DIDSession.get(accountId, authMethod, {
            resources: compose.resources,
          });
          await ceramic.setDID(session.did as unknown as DID);
          console.log("Auth'd:", session.did.parent);
          localStorage.setItem("did", session.did.parent);
          setAuth(true);
        }
      }
      void authenticate(walletClient);
    }, [walletClient]);
    return isAuth;
  }
  if (!isAuthenticated) {
    isAuthenticated = StartAuth();
  }
  return (
    <Context.Provider value={{ compose, isAuthenticated }}>
      {children}
    </Context.Provider>
  );
};

Notice how we’re using the wallet client’s account address to initiate a DID session that asks for specific resources from compose. If you trace this further, you’ll see that compose was instantiated using the definition imported from the file our deployment script wrote into. This allows us to access a limited scope to write data on the user’s behalf, specifically for the data models our application uses (these sessions auto-expire after 24 hours).

Finally, to bring this full circle, back to our /src/pages/_app.tsx file, you should now understand how we’re able to use ComposeDB as a contextual wrapper, enabling us to access both the ComposeDB client libraries and our model definitions from within any child component. For example, if you take a look at /src/components/index.tsx you’ll see how we’re now able to utilize our useComposeDB hook that allows us to run queries against our node’s client.
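As a rough illustration (component name and query invented here; the hook comes from the repo’s fragments file), consuming that context from a child component looks something like this:

import { useEffect, useState } from "react";
import { useComposeDB } from "../fragments";

export default function Viewer() {
  const { compose, isAuthenticated } = useComposeDB();
  const [viewerId, setViewerId] = useState<string>("");

  useEffect(() => {
    const getViewer = async () => {
      // `viewer` resolves to the account currently authenticated on the ComposeDB client
      const result = await compose.executeQuery(`query { viewer { id } }`);
      setViewerId((result.data?.viewer as { id: string })?.id ?? "");
    };
    if (isAuthenticated) void getViewer();
  }, [isAuthenticated, compose]);

  return <p>{viewerId}</p>;
}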

Create a Seed for our Application’s Server DID

We mentioned above that we’ll want our application to sign each badge payload before handing document control back to the end user. While this flow will not always be the case (read this blog on common data control patterns in Ceramic for more), we’ll want to implement this to ensure the verifiability of the data.

In /src/pages/api/create.ts we’ve created an API our application’s server will expose that does exactly this – it intakes the data relevant to the event, uses a SECRET_KEY environment variable to instantiate a static DID, and returns a Base64 string-encoded JSON web signature containing the signed data.
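In condensed form, the route does something along these lines (variable names are illustrative; see the repo for the full implementation):

import type { NextApiRequest, NextApiResponse } from "next";
import { DID } from "dids";
import { Ed25519Provider } from "key-did-provider-ed25519";
import { getResolver } from "key-did-resolver";
import { fromString } from "uint8arrays/from-string";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Instantiate a static did:key from the SECRET_KEY seed
  const seed = fromString(process.env.SECRET_KEY ?? "", "base16");
  const did = new DID({ provider: new Ed25519Provider(seed), resolver: getResolver() });
  await did.authenticate();

  // Sign the incoming event data and return it as a Base64-encoded JWS
  const jws = await did.createJWS(req.body);
  const jwt = Buffer.from(JSON.stringify(jws)).toString("base64");
  res.status(200).json({ jwt });
}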

We will therefore need to create a separate static seed to store in a .env file that we’ll create:

touch .env

For this tutorial, enter the following key-value pair into your new file:

SECRET_KEY="11b574d316903ced6cc3f4787bbcc3047d9c72d1da4d83e36fe714ef785d10c1"

When you use the above seed to instantiate a DID, this will yield the following predictable did:

did:key:z6MkqusKQfvJm7CPiSRkPsGkdrVhTy8EVcQ65uB5H2wWzMMQ

If you look back into /src/components/index.tsx you’ll see how our lengthy getParams method performs a quick check against any existing EncryptionEvent or WalletEvent badges the user already holds to test whether the jwt value was indeed signed by our application (a more thorough version of this could include verifying that the signed data matches the values from the other fields, but we’ll leave that up to you to add).
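A minimal sketch of that kind of check (the function name is invented for illustration, and the application DID is the one shown above) might look like:

import { DID } from "dids";
import { getResolver } from "key-did-resolver";

const APP_DID = "did:key:z6MkqusKQfvJm7CPiSRkPsGkdrVhTy8EVcQ65uB5H2wWzMMQ";

export const isSignedByApp = async (jwtB64: string): Promise<boolean> => {
  const did = new DID({ resolver: getResolver() });
  // Decode the Base64 string back into the original JWS object
  const jws = JSON.parse(Buffer.from(jwtB64, "base64").toString());
  try {
    const result = await did.verifyJWS(jws);
    // The kid is prefixed with the signing DID, e.g. "did:key:z6Mk...#z6Mk..."
    return result.kid.startsWith(APP_DID);
  } catch {
    return false;
  }
};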

That’s it! We are finally ready to run our application!

Running the Application

Now that we’ve set up everything we need for our app to run locally, we can start it up in developer mode. Be sure to select the correct node version first:

nvm use 20
npm run dev

Once you see the following in your terminal, your application is ready to view in your browser:

[Image: terminal output after starting the dev server]

In your browser, navigate to http://localhost:3000 – you should see the following:

[Image: application landing page]

Signing in with Web3Modal

As mentioned above, we’ve made our Web3Modal accessible from our navigation which is where our “Connect Wallet” button is coming from. Go ahead and give this button a click and select your wallet of choice.

During the sign-in cadence, you will notice an additional authorization message appear in your wallet that looks something like this:

[Image: wallet authorization request]

If you recall what we covered in the “Integrating ComposeDB with Our Application” section above, you’ll remember that we discussed how we created a DIDSession by requesting authorization over the specific resources (data models) our application will be using. These are the 3 items listed under the “Resources” section of the sign-in request you should see.

Finally, after you’ve signed in, your Web3Modal will now show a truncated version of your address:

[Image: Web3Modal button showing a truncated wallet address]

Creating Badges

As you can see, our application does not allow the user to input which event they have attended – this will be determined based on the URL the QR code sends the user to, using the following format:

http://localhost:3000/?event={event id}
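The repo already wires this up for you; conceptually, reading that parameter with Next’s router looks something like this (the hook name is invented for illustration):

import { useRouter } from "next/router";

// Returns the event's stream ID from the ?event= query parameter, if present
export const useEventId = (): string | undefined => {
  const { query } = useRouter();
  return typeof query.event === "string" ? query.event : undefined;
};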

Take a look at your browser console – you should see logs that look similar to this:

[Image: browser console logs showing model stream IDs]

We’ve preset these logs for you by reading from our runtime composite definition that we’ve imported into the /src/components/index.tsx component. Go ahead and copy one of those fields and construct your URL to look something like this:

http://localhost:3000/?event=kjzl6hvfrbw6c8njv24a3g4e3w2jsm5dojwpayf4pobuasbpvskv21vwztal9l2

If you’ve copied the stream ID corresponding to the EncryptionEvent model, your UI should now look something like this:

[Image: UI after entering the EncryptionEvent stream ID]

You can optionally select to share your coordinates. Finally, go ahead and create a badge for whichever event you entered into your URL:

[Image: badge creation confirmation]

If you navigate back to your /src/components/index.tsx file you can observe what’s happening in createBadge. After calling our /api/create route (which uses our application server’s static DID to sign the event data), we’re performing a mutation query that creates an instance of whichever event aligns with the identifier you used in your URL parameter. Since our user is the account currently authenticated on our node (from the creation of our DID session), the resulting document is placed into the control of the end user (with our tamper-evident signed data entered into the jwt field).
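The mutation createBadge runs looks roughly like the sketch below – the exact mutation name comes from the generated runtime definition, and the address and jwt values shown here are placeholders (latitude and longitude are included only when the user opts into sharing their location):

const data = await compose.executeQuery(`
  mutation {
    createEncryptionEvent(input: {
      content: {
        recipient: "${address}"
        timestamp: "${new Date().toISOString()}"
        jwt: "${jwt}"
      }
    }) {
      document {
        id
        recipient
        timestamp
        jwt
      }
    }
  }
`);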

If you take a look at our getParams method in our /src/components/index.tsx file, you’ll notice that we’ve created a query against our ComposeDB node that runs both within our useEffect React hook as well as after every badge creation event. Notice how we’re querying based on the user’s did:pkh: did:pkh:eip155:${chainId}:${address?.toLowerCase()}

If you take a look at our chainId and address assignments, you’ll realize these are coming from our Wagmi hooks we mentioned we’d need (specifically useAccount and useChainId).
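Put together, the getParams query is along these lines (the encryptionEvent and walletEvent field names depend on the runtime definition generated for our SINGLE-relation models):

const userDid = `did:pkh:eip155:${chainId}:${address?.toLowerCase()}`;
const data = await compose.executeQuery(`
  query {
    node(id: "${userDid}") {
      ... on CeramicAccount {
        encryptionEvent {
          id
          timestamp
          jwt
        }
        walletEvent {
          id
          timestamp
          jwt
        }
      }
    }
  }
`);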

What’s Next?

We hope you’ve enjoyed this fairly straightforward walk-through of how to use WalletConnect’s Web3Modal toolkit for authenticating users, creating user sessions in Ceramic, and querying ComposeDB based on the authenticated user! While that’s all for this tutorial, we encourage you to explore the other possibilities and journeys Ceramic has to offer. Below are a few we’d recommend:

Test Queries on a Live Node in the ComposeDB Sandbox

Build an AI Chatbot on ComposeDB

Create EAS Attestations + Ceramic Storage

Finally, we’d love for you to join our community:

Join the Ceramic Discord

Follow Ceramic on Twitter

Data Control Patterns in Decentralized Storage

https://blog.ceramic.network/data-control-patterns-in-decentralized-storage/

The Data Provenance article outlined several differentiating features while comparing Ceramic to smart contract platforms and centralized databases. When observed through the lens of the provenance and lineage of data, many challenges and design choices are inherently unique to a peer-to-peer, multi-directional replication layer that enables multi-master, eventually consistent databases to be built on top. As the article referenced above points out, the paradigm of true “user-controlled data” does not exist in traditional database terms, since user-generated content is authored by dedicated servers on the users’ behalf.

More recently, our team has been discussing and working on additional ComposeDB features that feel different from those aimed at providing functional parity with what developers expect from a database layer. Whereas the ability to filter and order based on schema subfields feels like a basic utility expectation, our more recent discussions around introducing features that allow developers to define single-write schemas, for example, call on use cases unique to decentralized data.

These feature ideas, and more importantly the problem statements behind them, impact certain patterns of data control more than others. These different architectural patterns are what we’ll attempt to unpack below by tying together their design, specific needs, how teams are currently building with them, and what role they play in the broader dataverse.

Users vs. Apps: Data Creation and Ownership

We’ve broken down these predominant patterns in the table below:

[Table: the four data control quadrants – application-generated/application-controlled, application-generated/user-controlled, user-generated/application-controlled, and user-generated/user-controlled]

Before we dive in, it’s important to point out that the way Ceramic “accounts” (represented by DIDs) interact with the network is itself a novel paradigm in data storage, and more akin to blockchain accounts transacting with a smart contract network. This is an important key to the underlying architecture that allows for the diversity of “data control” outlined in the table above. This article assumes readers are already familiar with this setup, at least on a basic level.

Finally, our position is that all of these patterns are necessary and important to a diverse data ecosystem. As we’ll outline below, these emergent behaviors arise from the real needs of the applications that use them and their users. We hope that this article will help developers understand why these different models exist, what trade-offs and considerations they incur, and how to use that information to approach their data design in a more informed way.

With that out of the way, let’s dive deeper into each of these patterns.

Application-Generated, Application-Controlled

When I think of an archetypal use case for this quadrant, the idea of a “data oracle” comes to mind. That oracle is responsible for:

  1. Bridging the “gap” between two disparate networks by reporting data
  2. Reliably providing said data in an opinionated, predictable, and steadily frequent way

Looking at it through this lens, the “application-generated” thought transforms into more of an “application-provided” idea. Though the data itself originates elsewhere, the role of the oracle is to retrieve it, transform it into whichever format is needed by the storage layer, and write it to its final destination.

There are commonalities in behavior across teams creating and using this data quadrant:

Use of DID:Key

  • Unsurprisingly, an application dedicated to writing reporting data to Ceramic frequently will be using a static seed (or set of static seeds) to authenticate DID:Key(s) on the Ceramic node

Server-Side Generation, Server-Side Writes

  • Given that the method used to write data to Ceramic doesn’t require authentication with a browser wallet, this action can be easily created as a server-side service
  • Similarly, another server service would be responsible for retrieving data from the data source and transforming it before supplying it to the service that writes it to Ceramic

Application as the Trusted Vector

  • Other teams relying on data provided by an oracle need to trust that the service won’t have intermittent failures

Data Availability Motives

  • As far as composability goes, the application writing the data, along with other applications building on that data, are incentivized to sync the data to ensure persistence

Unique Needs

  • Beyond basic sorting and filtering needed by any applications building on this data type, not much is needed
  • Consumers of this data (which is likely the application that controls it) trust the logic that was used to write the data to the network

Ceramic Example

Application-Generated, User-Controlled

As outlined in the table, a great example that falls into this category would be credential data. In the analog world, universities issue diplomas to their students. The diploma is typically signed by the university’s chancellor or president, but the physical document is transferred to and kept by the student (admittedly not a perfect analogy, since it’s typically the university that confirms to employers or other interested parties that a student has earned a diploma).

In a broadly similar sense, this category manifests in Ceramic as tamper-evident data that is cryptographically signed by the application but controlled and stored by the end user. Perhaps the data is encrypted by the user before it’s stored in Ceramic, with access control conditions granting viewing abilities only to whitelisted parties. Due to its tamper-evident qualities, all parties can rest assured of the fidelity of the data’s contents.

Here are some commonalities:

Use of DID:PKH

  • The resulting Ceramic documents are almost always written by the user with an authenticated session created by their blockchain wallet

Server-Side Generation, Client-Side Writes

  • In the case of credentials, the application is likely using a static key to generate payloads server-side (once certain application conditions are met) and passes the payload to the user to write with the authenticated session stored in the browser

Application as the Trusted Vector

  • What matters most for consumers of this data is the application that attested to it and signed it
  • While the user has technical control over the data and who they share it with (if they encrypt it), they cannot change the data’s values

Data Availability Motives

  • We can assume that some of this data will be highly valuable—perhaps the application that issues the credential is highly respected and unlocks a lot of gates for the end user
  • The user that controls these documents will likely be the most compelled party to keep this data available. It will be important for other applications that consume this data, but arguably not as much

Tamper-Evident, Cryptographically Signed

Unique Needs

  • Under the use case example referenced above, applications that issue data for their users to store typically don’t reference other data (for example, a relation to another Ceramic document) that can easily be changed—each data object is inherently whole
  • Therefore, I’d argue no unique needs are obvious or necessary for this quadrant either

Ceramic Example

  • Gitcoin Passport issues Verifiable Credentials to its users while giving the users Ceramic ownership of their instances

User-Generated, Application-Controlled

This unique category feels most similar to what we’re used to in Web2, though the use cases in Ceramic might be somewhat novel. For example, if you think back to the tamper-evident qualities of the application-generated, user-stored credential we referenced in the last section, the inverse could be necessary for some applications. Perhaps an application wants to create a petition that’s cryptographically signed by multiple users, but ownership of the final document that aggregates the individual signatures should be in the hands of the facilitator.

Perhaps the broader use case is positioned around collaboration, allowing individuals to input into the document with the resulting document representing that unified effort.

In terms of commonalities:

Use of DID:Key

  • The application will most likely be authoring Ceramic documents using a static seed or set of seeds

Client-Side Input, Server-Side Transformation, Server-Side Writes

  • Under this design, it’s likely that the application is receiving input from user browser sessions but is set up such that once a condition has been met, it will transform the data into the format necessary to write it to Ceramic

Application Server May or May Not Require Trust

  • If the inputs from the users are in a format that’s tamper-evident, end users interacting with this setup simply need to trust that the server writes the data to Ceramic, but do not need to worry about data fidelity

Data Availability Motives

  • If the application’s primary value proposition is its ability to facilitate coordination across users or transform data in creative ways, the controlling application is most motivated to keep the data available (alongside other applications consuming it)

Unique Needs

  • Feature needs for this quadrant should be highly similar to the “application-generated, application-controlled” category, in the sense that the application can trust the static logic of its servers not to intentionally change and ruin its data

User-Generated, User-Controlled

We anticipate this final category to eventually be the most common across applications built on Ceramic, and much of the momentum we’ve witnessed thus far aligns with that narrative. More importantly, while each of these categories is valuable to the ecosystem, this one speaks most to the narrative around data interoperability, making the concept of users interacting with the same data primitives they own across disparate environments and applications possible.

A great showcase example would be a social application framework that allows users to create posts, comment on posts, and react to content, all tied together by a parent “context” representing an application environment. Other social applications can read from and build off of the same content artifacts, or define a unique context that uses the same data primitives but swaps out the connective tissue.

Under this model, users have sovereign control of their documents. Even if a user creates a session key that allows a malicious application to write incorrect data on their behalf, the session key will ultimately expire (or be manually revoked by the user), the application will lose its write privileges, the Ceramic document never transfers ownership, and the user can overwrite the data. End users are equally free to spin up custom UIs and nodes and edit their data, which will render in the interfaces of the applications that defined those models.

However, high flexibility brings important considerations:

Use of DID:PKH

  • This shouldn’t be surprising since most users will be creating authenticated sessions from their browser wallet

Client-Side Generation, Client-Side Writes

  • This also shouldn’t be controversial given that the inputs are coming from end users, and their sessions are authenticated to write the data to Ceramic

Network Data Sync

  • This is where more complexity comes into play. For the data users generate to propagate and become usable between applications, their underlying Ceramic nodes must sync and share information
  • While a multi-node architecture may be required for a single application (in which case the data sync is still important as far as performance goes), in this context I’m referring to data sharing across nodes operated by separate applications
  • To reduce or eliminate the necessity of trust between application environments, the underlying protocol must have built-in features to ensure data fidelity. Team A should not have to rely on the competency of Team B, and shouldn’t suffer if Team B somehow messes up

Data Availability Motives

  • Interest in keeping the data available should be fairly shared between the end users and the applications that use it

Ceramic Example

  • Orbis is a social data model that provides intuitive tooling for developers who want to build social timelines, comment systems, private messaging, and more

Unique Needs

  • I think this is where we wade into territory that’s best illustrated by hypothetical situations:

Let’s say a trust system for open-source software plugins is built on Ceramic. Individual plugins are represented by ComposeDB models. For example:

type Plugin
    @createModel(accountRelation: LIST, description: "A simple plugin")
    @createIndex(fields: [{ path: "name" }])
    @createIndex(fields: [{ path: "created" }])
    @createIndex(fields: [{ path: "checkSum" }]) 
{
    owner: DID! @documentAccount 
    name: String! @string(maxLength: 100)
    description: String @string(maxLength: 100)
    created: DateTime!
    checkSum: String! @string (minLength: 64, maxLength: 64)
}

Let’s assume that the checkSum field in this example represents the SHA-256 hash of the plugin’s code.
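For context, computing a digest that satisfies the 64-character checkSum constraint might look like the following (a sketch using Node’s built-in crypto module; the file path is hypothetical):

import { createHash } from "crypto";
import { readFileSync } from "fs";

// Hash the plugin's code bundle; a SHA-256 hex digest is always 64 characters
const pluginCode = readFileSync("./plugin-bundle.js");
const checkSum = createHash("sha256").update(pluginCode).digest("hex");
console.log(checkSum.length); // 64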

For people to trust and use the plugins, they rely on representations of user-generated trust:

type Plugin @loadModel(id: "...") {
    id: ID!
}
type Trust
    @createModel(accountRelation: LIST, description: "A trust model")
    @createIndex(fields: [{ path: "trusted" }])
{
    attester: DID! @documentAccount 
    trusted: Boolean! 
    reason: String @string(maxLength: 1000)
    pluginId: StreamID! @documentReference(model: "Plugin")
    plugin: Plugin! @relationDocument(property: "pluginId")
}

Under this current setup, you can easily imagine the following problematic situations:

  1. Plugin A receives a ton of authentic, well-intentioned trust signals over a few weeks. The controller of the Plugin A stream then decides to switch out the value of the checkSum field to refer to a malicious plugin that steals your money. Because the relationship is defined by the plugin’s stream ID, those positive trust signals will by default point to the latest commit in the event log (the malicious plugin)
  2. Plugin A receives a ton of deceptive positive trust signals, all from Account A. Because (at the time of writing this article) there’s nothing that prevents Account A from doing so, User Interface A and User Interface B are left to do some custom client-side work to sort through the noise and present the truth to the end user. Since there are multiple ways of doing so, mistakes are made by User Interface B, and the end user is tricked yet again

You can start to see how these issues simply don’t exist elsewhere, where an application’s server acts as a safeguard against adverse or unexpected behavior.

Solutions Actively Underway

Bringing this back to Ceramic’s current roadmap, we have two active feature integrations we’re working on to address these unique problems, each of which is currently in the request-for-comment stage:

Native Support for Single-Write Documents (field locking)

This active RFC (request for comment) details a feature that would allow developers to define schemas that prevent certain fields within a model from being updated after the document is created. Using the Plugin example above, you can imagine how this might be useful in preventing a Plugin controller from maliciously updating key fields like checkSum to prevent users from being tricked.

A refactor of the Plugin schema definition using this feature might instead look like:

type Plugin
    @createModel(accountRelation: LIST, description: "A simple plugin")
    @createIndex(fields: [{ path: "name" }])
    @createIndex(fields: [{ path: "created" }])
    @createIndex(fields: [{ path: "checkSum" }]) 
{
    owner: DID! @documentAccount 
    name: String! @locking @string(maxLength: 100)
    description: String @string(maxLength: 100)
    created: DateTime! @locking
    checkSum: String! @locking @string (minLength: 64, maxLength: 64)
}

While actual syntax may vary, the @locking directive here is meant to identify the subfields that would be prevented from being updated after the document is created (most importantly, the checkSum).

Native Support for Unique-List Documents ("SET" accountRelation)

This RFC would help prevent situation #2 above from happening. While ComposeDB currently allows Account A to create as many Trust instances for Plugin A as it wants (thus leaving it up to the competency and inference of the applications or interfaces using that data to resolve), this feature would instead make the following possible:

type Plugin @loadModel(id: "...") {
    id: ID!
}
type Trust
    @createModel(accountRelation: SET, accountRelationField: "pluginId")
    @createIndex(fields: [{ path: "trusted" }])
{
    attester: DID! @documentAccount 
    trusted: Boolean! 
    reason: String @string(maxLength: 1000)
    pluginId: StreamID! @documentReference(model: "Plugin")
    plugin: Plugin! @relationDocument(property: "pluginId")
}

Under this design, users can create as many instances of Trust as they want, but they are prevented from creating more than one per plugin, given that the constraint is placed on the pluginId field.

Get Involved

If you find this problem space compelling, or want to get involved in ways that help shape the future of ComposeDB, we encourage you to react in the forum to the RFCs we mentioned above. Here are those links again:

Native Support for Single-Write Documents (field locking)

Native Support for Unique-List Documents ("SET" accountRelation)

Do you see additional gaps between our current functionality and the needs of the four design quadrants we outlined above? Let us know! Create an RFC of your own on the forum using the same format in the two above.

An Easy Way to Create Verifiable Claims on Ceramic

https://blog.ceramic.network/an-easier-way-to-create-verifiable-claims-on-ceramic/

We’ve discussed in our Attestations vs. Credentials blog and Verifiable Credentials guide how to make two claim standards “work together” on ComposeDB, improving querying efficiency and composability by integrating interfaces. Taking a step back, one of the core benefits these formats unlock for developers is the tamper-evident verification assurance that the resulting signature payloads provide. More broadly, developers (and by extension the users of their applications) who want to use verifiable claims in their applications value claim portability.

Why does claim portability matter?

Unlike a public blockchain that reveals the behavior of a user address simply by observing its transaction history, claims produced in ‘off-chain’ environments require different qualities to be inherently ‘provable’. At the same time, many developers agree that for certain types of data, or certain application environments, a fully on-chain architecture is both cost-inefficient and non-performant. As a result, some of the systems we’re seeing built on Ceramic today use standards like W3C Verifiable Credentials or EAS Offchain Attestations to provide portability assurances.

Each document written to Ceramic in the form of a Verifiable Credential or off-chain EAS Attestation payload contains everything needed to later recall the document from Ceramic and validate the signatures. The document can therefore be considered “whole” and simply requires minor reconstruction (to create a Verifiable Presentation in the case of a VC, for example).

So, what’s the issue?

In the Attestations vs. Credentials post, we used a simple VC definition meant only to illustrate a trust credential issued from one account that points to another. The credential subject therefore contained only two fields:

  1. isTrusted : a boolean field that represents whether the party is trusted or not
  2. id : the DID of the recipient account

Despite the brevity of this credential type, the payload of one example EIP712 instance was quite verbose:

{
    "issuer": "did:pkh:eip155:1:0x06801184306b5eb8162497b8093395c1dfd2e8d8",
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://beta.api.schemas.serto.id/v1/public/trusted-reviewer/1.0/ld-context.json"
    ],
    "type": [
        "VerifiableCredential",
        "Trusted"
    ],
    "credentialSchema": {
        "id": "https://beta.api.schemas.serto.id/v1/public/trusted/1.0/json-schema.json",
        "type": "JsonSchemaValidator2018"
    },
    "credentialSubject": {
        "isTrusted": true,
        "id": "did:pkh:eip155:1:0xcc2158d7e1b0fffd4db6f51e35f05e00d8fe30b2"
    },
    "issuanceDate": "2023-12-05T21:03:03.061Z",
    "proof": {
        "verificationMethod": "did:pkh:eip155:1:0x06801184306b5eb8162497b8093395c1dfd2e8d8",
        "created": "2023-12-05T21:03:03.061Z",
        "proofPurpose": "assertionMethod",
        "type": "EthereumEip712Signature2021",
        "proofValue": "0x47fadf4bab9c0d111b6bf304eb2c72e6419c636f7b117761ce5cf4926a79074e073e2560b90d78230deac06a7afc705813f3f403fa51967e2da0e7783d4dae0d1b",
        "eip712": {
            "domain": {
                "chainId": 1,
                "name": "VerifiableCredential",
                "version": "1"
            },
            "types": {
                "EIP712Domain": [
                    {
                        "name": "name",
                        "type": "string"
                    },
                    {
                        "name": "version",
                        "type": "string"
                    },
                    {
                        "name": "chainId",
                        "type": "uint256"
                    }
                ],
                "CredentialSchema": [
                    {
                        "name": "id",
                        "type": "string"
                    },
                    {
                        "name": "type",
                        "type": "string"
                    }
                ],
                "CredentialSubject": [
                    {
                        "name": "id",
                        "type": "string"
                    },
                    {
                        "name": "isTrusted",
                        "type": "bool"
                    }
                ],
                "Proof": [
                    {
                        "name": "created",
                        "type": "string"
                    },
                    {
                        "name": "proofPurpose",
                        "type": "string"
                    },
                    {
                        "name": "type",
                        "type": "string"
                    },
                    {
                        "name": "verificationMethod",
                        "type": "string"
                    }
                ],
                "VerifiableCredential": [
                    {
                        "name": "@context",
                        "type": "string[]"
                    },
                    {
                        "name": "credentialSchema",
                        "type": "CredentialSchema"
                    },
                    {
                        "name": "credentialSubject",
                        "type": "CredentialSubject"
                    },
                    {
                        "name": "issuanceDate",
                        "type": "string"
                    },
                    {
                        "name": "issuer",
                        "type": "string"
                    },
                    {
                        "name": "proof",
                        "type": "Proof"
                    },
                    {
                        "name": "type",
                        "type": "string[]"
                    }
                ]
            },
            "primaryType": "VerifiableCredential"
        }
    }
}

As a result, the schemas used to capture and store the VCs required several layers of interfaces with many subfields. Additionally, as mentioned before, the data itself would still require a process of normalization to create a presentation to be verified (for example the context key used in the ComposeDB schema would need to be replaced with @context).

This example also requires the end user to sign a transaction in their browser wallet each time a Verifiable Credential is issued, regardless of whether they wish to use a JWS or EIP-712 proof type. You can imagine how this would become cumbersome to a user who wants to issue several in one sitting.

The entirety of this setup seems like overkill for an assertion that uses such a simple data model. While it may be a necessity for architectures that require the W3C Verifiable Credential standard, those that don’t can ensure claim portability more simply. What if we instead leverage Ceramic’s core capabilities to more seamlessly design an architecture that accomplishes verifiable claims?

Example Application: Walk-Through

To illustrate how to do this, we’ve put together the following repository that this section of the article will reference:

https://github.com/ceramicstudio/did-session-claims

Before we begin to boot up a local deployment of our application, let’s talk through how our user authentication is configured, and how this relates to the portability of the claims we’ll produce.

Authentication

This example uses WalletConnect’s Web3Modal module to pass authentication logic down through the child components of this application (thus making it possible for us to use it for Ceramic authentication). If you navigate into /src/pages/_app.tsx, you can see how our Wagmi configuration encompasses our ComposeDB wrapper, both of which encompass any React components we’ll render to our client.

Let’s take a deeper look at the ComposeDB wrapper imported and used at the top level of our application. If you open up /src/fragments/index.tsx, you’ll notice how we’re using a React createContext hook to make our ComposeDB client (with our casted runtime definition), an isAuthenticated value, and a keySession object available to any child components that import and call the exported useComposeDB method.

Take a look at what’s happening in the useEffect hook within the ComposeDB parent object:

export const ComposeDB = ({ children }: ComposeDBProps) => {
  function StartAuth(isAuthenticated: boolean = false) {
    const { data: walletClient, isError, isLoading } = useWalletClient();
    const [isAuth, setAuth] = useState(false);
    useEffect(() => {
      async function authenticate(
        walletClient: GetWalletClientResult | undefined
      ) {
        if (walletClient ) {
          const accountId = await getAccountId(walletClient, walletClient.account.address)
          console.log(walletClient.account.address, accountId)
          const authMethod = await EthereumWebAuth.getAuthMethod(walletClient, accountId)
          // create session
          const session = await DIDSession.get(accountId, authMethod, { resources: compose.resources })
          //set DID on Ceramic client
          await ceramic.setDID(session.did)
          await session.did.authenticate();
          console.log("Auth'd:", session.did.id);
          localStorage.setItem("did", session.did.id);
          keySession = session;
          setAuth(true);
        }
      }
      authenticate(walletClient);
    }, [walletClient]);
    return isAuth;
  }
  if (!isAuthenticated) {
    isAuthenticated = StartAuth();
  }
  return (
    <Context.Provider value={{ compose, isAuthenticated, keySession }}>
      {children}
    </Context.Provider>
  );
};

Since we know our application will be using a wallet client (given our use of Web3Modal), we’ll rely on Wagmi’s useWalletClient method to obtain an account ID and authentication method that we’ll later use as arguments to create a DID session. In effect, this creates and authorizes a DID session key for the user, with capabilities to write only against the specific data models referenced in the {resources: compose.resources} argument. This is the predominant way blockchain accounts can use Sign-In with Ethereum and CACAO for authorization (yielding a parent did:pkh that authorizes child session keys with limited and temporary capabilities).

Finally, you’ll see how our keySession variable is redefined as the current authenticated session, thus exposing it to any React components that import useComposeDB.

DID Class Capabilities

If you take a deeper look at the DID Class, you’ll notice a createJWS method available to use that creates a JWS-encoded signature over a specified payload which is signed by the currently authenticated DID. This is what we’ll use to create an explicit JWS-encoded signature over our “Trust Credential” payloads, allowing us to easily determine who signed the payload, and determine if the credential has been altered.

Data Models

If you navigate into the /composites directory, you’ll find the schema definition we’ll use to store this simple assertion type:

## our broadest claim type
interface VerifiableClaim
  @createModel(description: "A verifiable claim interface") {
  controller: DID! @documentAccount
  recipient: DID! @accountReference
}
type Trust implements VerifiableClaim
  @createModel(accountRelation: LIST, description: "A trust credential") {
  controller: DID! @documentAccount
  recipient: DID! @accountReference
  trusted: Boolean!
  jwt: String! @string(maxLength: 100000)
}

You’ll immediately notice that this definition is much more concise compared to the definitions we used for our Verifiable Credential or Attestation versions of the account trust credential. The recipient and trusted fields will be used to store the end values our users write to Ceramic, while a jwt field will hold our portable claim (which will be redundant against the plain values we’re writing in the two fields above it, but will provide the tamper-evident and portable qualities we’ll want).

Writing Our Credentials

If you imagine our Trust model without the jwt subfield, a write query to our ComposeDB client might look something like this:

const data = await compose.executeQuery(`
      mutation{
        createTrust(input: {
          content: {
            recipient: "${completeCredential.recipient}"
            trusted: ${completeCredential.trusted}
          }
        })
        {
          document{
            id
            recipient{
              id
            }
            trusted
          }
        }
      }
    `);

In our case, if you take a look into the /src/components/Claim.tsx component, you’ll notice how the saveBaseCredential method uses a few additional steps to create a JWT string prior to saving to Ceramic:

const saveBaseCredential = async () => {
    const credential = {
      recipient: `did:pkh:eip155:1:${destination.toLowerCase()}`,
      trusted: true,
    };
    if (keySession) {
      const jws = await keySession.did.createJWS(credential);
      const jwsJsonStr = JSON.stringify(jws);
      const jwsJsonB64 = Buffer.from(jwsJsonStr).toString("base64");
      const completeCredential = {
        ...credential,
        jwt: jwsJsonB64,
      };
      const data = await compose.executeQuery(`
      mutation{
        createTrust(input: {
          content: {
            recipient: "${completeCredential.recipient}"
            trusted: ${completeCredential.trusted}
            jwt: "${completeCredential.jwt}"
          }
        })
        {
          document{
            id
            recipient{
              id
            }
            trusted
            jwt
          }
        }
      }
    `);
    }
  };

First, notice how we define a credential object that includes only the essential values each assertion needs (the recipient, represented by their did:pkh, and the trusted value). Next, we use the createJWS method (mentioned previously) from our authenticated DID session (represented by the keySession yielded by the useComposeDB destructuring at the top of the component). Finally, we Base64-encode the result and save it to Ceramic.

Notice how our credential doesn’t hard-code a controller (i.e. the account creating the assertion). As we’ll show later, the DID used to create the JWS can easily be extracted from the payload (along with the assertion values) and is therefore inherently included.
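If you’d like to see that extraction in isolation, here’s a tiny sketch using the same DID and key-did-resolver utilities the component imports (the jws variable is assumed to be the object returned by createJWS above):

import { DID } from "dids";
import KeyResolver from "key-did-resolver";

// verify a JWS and read back both the signing did:key and the original payload;
// `jws` is assumed to be the object returned by keySession.did.createJWS above
const verifier = new DID({ resolver: KeyResolver.getResolver() });
const { payload, didResolutionResult } = await verifier.verifyJWS(jws);
console.log(didResolutionResult?.didDocument?.id); // the did:key that signed
console.log(payload); // { recipient, trusted }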

Getting Started

To begin, clone the example app repository and install your dependencies:

git clone https://github.com/ceramicstudio/did-session-claims && cd did-session-claims
npm install

We’ve set up a script for you in /scripts/commands.mjs to automate the creation of your ComposeDB server configuration and admin credentials. You’ll use it in the next step by running the following command in your terminal:

npm run generate

If you take a look at your admin_seed.txt and composedb.config.json files, you’ll now see that an admin seed and server configuration have been generated for you. Taking a closer look at your server configuration, you’ll notice the following key-value pair within the JSON object:

"ipfs": { "mode": "remote", "host": "http://localhost:5001" }

Unlike several of our other tutorials where we run IPFS in “bundled” mode, we’ll need our IPFS instance running separately for this walk-through so we can show how to obtain the session key from the DAG node in IPFS (to prove that the session key that created the Ceramic commit is the same one that invoked the createJWS method referenced above).

We’ll want to start our IPFS daemon next (with pubsub enabled):

ipfs daemon --enable-pubsub-experiment 

In a separate terminal, allow CORS (to enable our frontend to access data from the Kubo RPC API):

ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["'"$origin"'", "http://localhost:8080","http://localhost:3000"]'

Finally, start your application:

nvm use 20
npm run dev

This command will invoke the script created for you in /scripts/run.mjs, which is responsible for deploying our ComposeDB schema on your local node, booting up a GraphiQL instance on port 5005, and starting your NextJS interface on port 3000.

Creating Assertions

If you navigate to http://localhost:3000 in your browser, you’ll be able to authenticate yourself by clicking the “Connect Wallet” button in the navigation (using Web3Modal). You’ll also be prompted with a second signature request – this one is from the /src/fragments/index.tsx context we discussed earlier, which creates the DID session, with limited scope to write data only against the VerifiableClaim interface and Trust types (shown in “resources” below):

[Screenshot: the DID session signature request, with the scoped models shown under “resources”]

You should now see the following in your browser:

[Screenshot: the application’s home view, with an Address input and a “Generate Claim” button]

As mentioned above, the corresponding React component can be found at /src/components/Claim.tsx. We’ve added some logs into the code for you, so we recommend inspecting your console in your browser as you follow along.

Go ahead and enter a dummy wallet address into the Address input field, and click “Generate Claim.” You’ll notice several new logs in your browser console – we’ll walk through these as we explain what’s happening.

Above, we discussed how the saveBaseCredential method uses our authenticated session key to sign our credential and save it to Ceramic with a mutation query on our ComposeDB client. In your text editor, you’ll also see that we invoke validateBaseCredential at the end of this call.

Let’s take a look at what’s happening here:

const validateBaseCredential = async () => {
    const credential: any = await compose.executeQuery(
      `query {
        trustIndex(last: 1){
          edges{
            node{
              recipient{
                id
              }
              controller {
                id
              }
              trusted
              jwt
              id
            }
          }
        }
      }`
    );
    console.log(credential.data)
    if (credential.data.trustIndex.edges.length > 0) {
      //obtain did:key used to sign the credential
      const credentialToValidate = credential.data.trustIndex.edges[0].node.jwt;
      const json = Buffer.from(credentialToValidate, "base64").toString();
      const parsed = JSON.parse(json);
      console.log(parsed);
      const newDid = new DID({ resolver: KeyResolver.getResolver() });
      const result = await newDid.verifyJWS(parsed);
      const didFromJwt = result.didResolutionResult?.didDocument?.id;
      console.log('This is the payload: ', result.payload);
      //obtain did:key used to authorize the did-session
      const stream = credential.data.trustIndex.edges[0].node.id;
      const ceramic = new CeramicClient("http://localhost:7007");
      const streamData = await ceramic.loadStreamCommits(stream);
      const cid: CID | undefined = streamData[0] as CID;
      const cidString = cid?.cid;
      //obtain DAG node from IPFS
      const url = `http://localhost:5001/api/v0/dag/get?arg=${cidString}&output-codec=dag-json`;
      const data = await fetch(url, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
      });
      const toJson: DagJWS = await data.json();
      const res = await newDid.verifyJWS(toJson);
      const didFromDag = res.didResolutionResult?.didDocument?.id;
      console.log(didFromJwt, didFromDag);
      if (didFromJwt === didFromDag) {
        console.log(
          "Valid: " +
            didFromJwt +
            " signed the JWT, and has a DID parent of: " +
            credential.data.trustIndex.edges[0].node.controller.id
        );
      } else {
        console.log("Invalid");
      }
    }
  };

First, we’re grabbing the most recent Trust model instance from our index (the one we just created). After we’ve extracted the value of the jwt field from this node and converted it back to a JSON object, notice how we’re eventually able to extract the id subfield after verifying the signature (which we subsequently assign to didFromJwt). This is the did:key that was used to sign the JWT.

Next, for the sake of proving ownership back to the parent did:pkh account, we’ll show how to obtain the did:key from the DAG node that was used to sign the Ceramic commit. After loading the stream commits based on the node’s StreamID and isolating the IPFS CID, we’re making a fetch request to our remote IPFS node running on port 5001, asking for the DAG node that’s tied to the Ceramic commit (for more information on the various API options, visit Kubo RPC API).

Similar to how we extracted the did:key from our jwt subfield, we can use the JSON result to verify the signature and isolate the id field from the DID document. Finally, we log “Valid” after confirming that the two DIDs match, proving that the did:key used to generate the portable JWS over the data is the same session key that signed the Ceramic commit (whose parent is the stream’s controlling did:pkh).

Finally, just for fun, we’ve set up an embedded GraphiQL instance on the same page for you – go ahead and submit the default query to view an instance result:

[Screenshot: the embedded GraphiQL instance with the default query and its result]

What have we learned?

Different teams will have different requirements related to verifiable claims. For some, the use of a standard like a W3C Verifiable Credential will be a requirement. Other teams might care more about claim portability, along with an easy and reliable way to tie signed data back to its signer.

It’s also likely obvious to some readers that the logic we’ve set up to create a credential object and sign it with the did:key happens at the application level, not the Ceramic protocol level. A malicious application could therefore cause its users to generate false assertions (though the same could be said of the temporary write privileges a user grants over a set of ComposeDB model definitions when they sign and create their DID session).

At the very least, we hope you walk away from this article with a better understanding of how DID sessions work, how they’re created, and where the DAG nodes can be queried from a Ceramic node’s IPFS instance. There are trade-offs to be considered across all verifiable claim options, so we hope these ideas spark future creativity around new solutions.

Finally, if you’re interested in diving deeper into what can be done with DID sessions, check out this repository branch that shows how to allow users to generate VCs and EAS attestations using their session signature:

https://github.com/ceramicstudio/user-controlled-claims/tree/did-session

Navigate to our forum if you have questions or want to contribute to our RFC on verifiable claims!

Attestations vs. Credentials: Making Claims Interoperable

https://blog.ceramic.network/attestations-vs-credentials-making-claims-interoperable/

In our Verifiable Credentials tutorial and Data Provenance blog article, we discussed how some teams building on Ceramic use verifiable claims to make assertions about their users or allow their users to make claims about other users or non-user entities. At the time of writing this article, we’re seeing momentum and excitement build around particular claim standards (such as W3C Verifiable Credentials and attestations using the Ethereum Attestation Service). At the core of these standards are the features they unlock, not only for the users themselves but (importantly) for the developers integrating them into their applications.

So what are these features? What are the differences in capabilities between standards currently being used? What trade-offs should developers consider before choosing one framework over another? Is there a world where they can be used together?

As you might’ve guessed, these are the topics we’ll cover below. More specifically, we will observe how these standards compare in the world of off-chain self-sovereign identity (SSI).

Common Features of Verifiable Claims

Schemas

If verifiable claims themselves are building blocks of online trust and reputation, the schemas those claims use provide their necessary structure. This ensures that two separate instances of a claim that fall into the same claim “family” can be predictably and reliably interpreted.

For example, if you were using a courseCompleted schema in an academic setting that used a required boolean field (representing whether or not a student passed a course) and a required string field (to identify the course by name), it would be hard to consume and interpret data instances that had additional unwanted fields or were altogether missing one of those two required fields.
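As a rough illustration (the field names below are hypothetical and not drawn from any published registry), a JSON Schema for such a claim might enforce that conformity like this:

// a hypothetical JSON Schema for the courseCompleted example above
const courseCompletedSchema = {
  $schema: "https://json-schema.org/draft/2020-12/schema",
  type: "object",
  properties: {
    courseName: { type: "string" }, // identifies the course by name
    passed: { type: "boolean" },    // whether or not the student passed
  },
  required: ["courseName", "passed"],
  additionalProperties: false,      // reject instances carrying unwanted extra fields
} as const;

Any instance missing one of the two required fields (or carrying extra ones) would fail validation, which is exactly the conformity both ecosystems are after.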

Take a look at how the W3C data schemas section articulates the need for schemas compared to a similar section on the EAS docs site. While the implementation differs, both point out that schemas are essential for enforcing data conformity within a given collection.

Public Schema Registries

A schema registry is another commonality between these emergent standards, the common thread being a public place to reference and house schemas. In the world of decentralization and SSI, the registry plays a particularly compelling role because it enables disparate applications to build on shared data models, thanks to the predictability of the shape of each data instance.

For instance, if Application A is a Ph.D. program, it can validate credentials created by Application B, a four-year university, and issue its own credentials against the same courseCompleted schema. Employer A, in turn, cares about credentials issued by both Application A and Application B.

For entities issuing claims, these registries help ensure their data conforms to the schemas they utilize while (just as importantly) functioning as a discovery mechanism for SSI claim types. Entities consuming and validating those claims use the same schema definitions to verify whether a user has cryptographically signed a given statement, making it essentially impossible for bad actors to tamper with those claims.

Registries like Serto service this need for applications using Verifiable Credentials, whereas Verax and EAS use immutable on-chain registries where anyone can deploy or reference schemas for attestations.

Credential Metadata + Body

The most obvious commonality across standards is the assertion itself (presumably in the form of the schema it is using), alongside other important metadata such as expiration dates or revocation status. The form and corresponding values of the credential itself, for example, are checked against the expected form of the schema by issuers, thus making it easy to omit nonconforming data altogether.

Proof (Signatures)

Whereas a public ledger of record like a blockchain verifiably reveals the behavior of a user address simply by observing its transaction history (thus simplifying the observability of on-chain attestations so long as a user’s account hasn’t been hacked), verifiable claims produced elsewhere must include cryptographic proofs that verify the provenance of those assertions. Similar to the role of public schema registries, attached proofs ensure that consuming entities can reliably verify that a given user has made some claim, agnostic of where the claim was stored or when it was produced.

For example, the EAS off-chain module incurs an EIP-712 signTypedData request that packages the contents of the attestation as method arguments, encodes, hashes, and requests a user signature over the hash of that data. Veramo’s EIP-712 module (just one method of producing a signature with Veramo) similarly requests and yields an EIP-712-conforming payload over the Verifiable Credential being signed.
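To make the mechanics concrete, here’s a hedged, generic sketch of what an EIP-712 signature request looks like in code (using ethers; the domain, types, and field names are hypothetical and not taken from the EAS or Veramo SDKs):

import { Signer } from "ethers";

// a hypothetical EIP-712 payload loosely modeled on the "trusted account" example
async function signTrustedClaim(signer: Signer, recipient: string): Promise<string> {
  const domain = { name: "Trusted", version: "1", chainId: 1 };
  const types = {
    Trusted: [
      { name: "recipient", type: "address" },
      { name: "isTrusted", type: "bool" },
    ],
  };
  const message = { recipient, isTrusted: true };
  // the wallet hashes the typed data and asks the user to sign it
  return signer.signTypedData(domain, types, message);
}

The resulting signature can travel with the claim wherever it’s stored, letting any verifier recover the signer later.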

Finally, public data ledgers like Ceramic are architected on mutable data streams, allowing users to create and mutate their streams as much as they desire under a single authenticated session. Users might choose to make claims about themselves or others without an additional attestation or framework layer.

Oamo, for example, is a platform built on Ceramic that helps individuals own and monetize their data by granting other organizations and companies access to it in exchange for rewards. Oamo generates claims authored by the application itself (as opposed to user-authored claims). For example, Oamo would issue a certain claim type after a user has connected their Twitter account. Unlike teams like Gitcoin that store Verifiable Credentials on Ceramic, Oamo uses predefined ComposeDB schemas to provide claim structure, with no additional Verifiable Credential or attestation framework layered on top. You can view one of these model definitions here.

Similar to a blockchain, if a data consumer is aware that claims are being authored by users they care about in this type of environment, they can choose to index on those streams and can rest assured of the provenance and lineage of that user data. Ease of portability to a different environment later on, however, might be an issue under this paradigm.

Decentralized Identifiers

If you’re reading this, chances are you already know the whole spiel on DIDs, so we won’t spend too much time here. The important key to note here is how these identifiers are viewed as an essential pillar of SSI. Similar to how issuers and consumers of verifiable claims containing proofs can collaborate on shared data due to the inherent characteristics of those claims, DIDs ensure that individual users can take their identity from one context to another without the permission or awareness of some central actor.

Choosing between Claim Types

Despite their similarities, developers who choose to incorporate claims into their applications must consider the trade-offs and differences in experience each method offers. For the purpose of this comparison, I’m intentionally isolating “off-chain” claims (produced either by signed meta-transactions or generated on a public data ledger like Ceramic, both of which incur no cost to the issuer).

[Table: comparison of off-chain claim types]

Given this brief comparison, it’s easy to understand why many teams choose to build high-portability claim types like Verifiable Credentials on top of a public data network like Ceramic. This combination works in complementary fashion to enable easy portability and verifiability (when using a framework), alongside the availability and discoverability benefits of a network.

Making Claim Standards Interoperable

Given the similarities between the two tamper-evident claim methods, developers might want to give their users optionality by allowing them to choose which one to use. Teams building on Ceramic and utilizing ComposeDB can offer their users this flexibility without degrading the efficiency of their query logic. By using interfaces, developers can define a “family” of data under a verifiableClaim (or other preferred name) interface and define the fields the two standards share as interface subfields. The standard-specific subfields can then act as differentiators defined within higher-level types.

For example, let’s say an application wants to allow their users to create accountTrust claims that point to a DID and contain a boolean value representing trust or distrust (here’s a VC definition and EAS schema that do exactly this). Here’s a resulting payload from each:

// Credential
{
    "issuer": "did:pkh:eip155:1:0x06801184306b5eb8162497b8093395c1dfd2e8d8",
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://beta.api.schemas.serto.id/v1/public/trusted-reviewer/1.0/ld-context.json"
    ],
    "type": [
        "VerifiableCredential",
        "Trusted"
    ],
    "credentialSchema": {
        "id": "https://beta.api.schemas.serto.id/v1/public/trusted/1.0/json-schema.json",
        "type": "JsonSchemaValidator2018"
    },
    "credentialSubject": {
        "isTrusted": true,
        "id": "did:pkh:eip155:1:0xcc2158d7e1b0fffd4db6f51e35f05e00d8fe30b2"
    },
    "issuanceDate": "2023-12-05T21:03:03.061Z",
    "proof": {
        "verificationMethod": "did:pkh:eip155:1:0x06801184306b5eb8162497b8093395c1dfd2e8d8",
        "created": "2023-12-05T21:03:03.061Z",
        "proofPurpose": "assertionMethod",
        "type": "EthereumEip712Signature2021",
        "proofValue": "0x47fadf4bab9c0d111b6bf304eb2c72e6419c636f7b117761ce5cf4926a79074e073e2560b90d78230deac06a7afc705813f3f403fa51967e2da0e7783d4dae0d1b",
        "eip712": {
            "domain": {
                "chainId": 1,
                "name": "VerifiableCredential",
                "version": "1"
            },
            "types": {
                "EIP712Domain": [
                    {
                        "name": "name",
                        "type": "string"
                    },
                    {
                        "name": "version",
                        "type": "string"
                    },
                    {
                        "name": "chainId",
                        "type": "uint256"
                    }
                ],
                "CredentialSchema": [
                    {
                        "name": "id",
                        "type": "string"
                    },
                    {
                        "name": "type",
                        "type": "string"
                    }
                ],
                "CredentialSubject": [
                    {
                        "name": "id",
                        "type": "string"
                    },
                    {
                        "name": "isTrusted",
                        "type": "bool"
                    }
                ],
                "Proof": [
                    {
                        "name": "created",
                        "type": "string"
                    },
                    {
                        "name": "proofPurpose",
                        "type": "string"
                    },
                    {
                        "name": "type",
                        "type": "string"
                    },
                    {
                        "name": "verificationMethod",
                        "type": "string"
                    }
                ],
                "VerifiableCredential": [
                    {
                        "name": "@context",
                        "type": "string[]"
                    },
                    {
                        "name": "credentialSchema",
                        "type": "CredentialSchema"
                    },
                    {
                        "name": "credentialSubject",
                        "type": "CredentialSubject"
                    },
                    {
                        "name": "issuanceDate",
                        "type": "string"
                    },
                    {
                        "name": "issuer",
                        "type": "string"
                    },
                    {
                        "name": "proof",
                        "type": "Proof"
                    },
                    {
                        "name": "type",
                        "type": "string[]"
                    }
                ]
            },
            "primaryType": "VerifiableCredential"
        }
    }
}
// Attestation
{
    "domain": {
        "name": "EAS Attestation",
        "version": "0.26",
        "chainId": 1,
        "verifyingContract": "0xA1207F3BBa224E2c9c3c6D5aF63D0eb1582Ce587"
    },
    "primaryType": "Attest",
    "message": {
        "recipient": "0xcc2158d7e1b0fffd4db6f51e35f05e00d8fe30b2",
        "expirationTime": 0,
        "time": 1701810283,
        "revocable": true,
        "version": 1,
        "nonce": 0,
        "schema": "0x776c6c1d76055522753787b3abfdbeeff262cda35eebecaf83059b738698ef62",
        "refUID": "0x0000000000000000000000000000000000000000000000000000000000000000",
        "data": "0x0000000000000000000000000000000000000000000000000000000000000001"
    },
    "types": {
        "Attest": [
            {
                "name": "version",
                "type": "uint16"
            },
            {
                "name": "schema",
                "type": "bytes32"
            },
            {
                "name": "recipient",
                "type": "address"
            },
            {
                "name": "time",
                "type": "uint64"
            },
            {
                "name": "expirationTime",
                "type": "uint64"
            },
            {
                "name": "revocable",
                "type": "bool"
            },
            {
                "name": "refUID",
                "type": "bytes32"
            },
            {
                "name": "data",
                "type": "bytes"
            }
        ]
    },
    "signature": {
        "v": 27,
        "r": "0x9957ad308ba7e4355092b66e3fd26f56e35ea93f6e5149304a4a063ff4732efb",
        "s": "0x38aa8ae9f09dbc050b5bc7f8ecd9ed92bf7e3cdca34feb3e2789fa22b298b202"
    },
    "uid": "0xfd7bad067b12dc55865cdd3c9c46b331a31c6a872bd75ca5fc6a8a623134f28b",
    "account": "0x06801184306b5eb8162497b8093395c1dfd2e8d8"
}

You’ll notice a few things right away that underline differences we’ll have to account for when saving to ComposeDB. For example, the issuer and credentialSubject.id fields within our Verifiable Credential use DIDs, whereas the account (akin to issuer) and message.recipient fields within our attestation use Ethereum addresses. The signature formats also differ: compare the signature object within our attestation instance with the proof.proofValue field in our credential.

Let’s assume we want to make these work as-is without deploying new Verifiable Credential or Attestation schemas.

Layer 0 (Broadest)

Since we’re allowing our users to author both their own claims and the corresponding Ceramic documents, our broadest-level interface can tie together the claim creator and the claim recipient:

## our broadest claim type
interface VerifiableClaim 
@createModel(description: "A verifiable claim interface")
{
  controller: DID! @documentAccount
  recipient: DID! @accountReference
}

We will need to account for the differences between the payloads and how they must be saved. For example:

//attestations
recipient: "${"did:pkh:eip155:1:" + attestation.message.recipient}"
//VCs
recipient: "${credential.credentialSubject.id}"

Layer 1

Our next layer down can therefore dictate which claim standard was used:

## our overarching VC interface that acts agnostic of our proof type
interface VerifiableCredential implements VerifiableClaim
  @createModel(description: "A verifiable credential interface")
{
  controller: DID! @documentAccount
  recipient: DID! @accountReference
  issuer: Issuer! 
  context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  credentialSchema: CredentialSchema!
  credentialStatus: CredentialStatus
  issuanceDate: DateTime!
  expirationDate: DateTime
}
## our overarching Attestation interface that acts agnostic of our proof type
interface Attestation implements VerifiableClaim
@createModel(description: "An attestation interface")
{
  controller: DID! @documentAccount
  recipient: DID! @accountReference
  attester: DID! @accountReference
  trusted: Boolean!
  uid: String! @string(minLength: 66, maxLength: 66)
  schema: String! @string(minLength: 66, maxLength: 66)
  verifyingContract: String! @string(minLength: 42, maxLength: 42)
  easVersion: String! @string(maxLength: 5)
  version: Int!
  chainId: Int! 
  r: String! @string(minLength: 66, maxLength: 66)
  s: String! @string(minLength: 66, maxLength: 66)
  v: Int! 
  types: [Types] @list(maxLength: 100)
  expirationTime: DateTime
  revocationTime: DateTime
  refUID: String @string(minLength: 66, maxLength: 66)
  time: Int! 
  data: String! @string(maxLength: 1000000)
}

You can start to see how these parent definitions open up the possibilities of running queries like this:

query VerifiableClaims {
  verifiableClaimIndex(last: 10) {
    edges {
      node {
        recipient {
          id
        }
        controller {
          id
        }
        ... on VerifiableCredential {
          issuer {
            id
          }
        }
        ... on Attestation {
          attester {
            id
          }
        }
      }
    }
  }
}

Layer 2

We can add a third interface layer that accounts for the differences between proof types:

## generalized JWT proof interface for VCs
interface VCJWTProof implements VerifiableClaim & VerifiableCredential 
  @createModel(description: "A verifiable credential interface of type JWT")
{
  controller: DID! @documentAccount
  recipient: DID! @accountReference
  issuer: Issuer! 
  context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  credentialSchema: CredentialSchema!
  credentialStatus: CredentialStatus
  issuanceDate: DateTime!
  expirationDate: DateTime
  proof: ProofJWT!
}
type ProofJWT {
  type: String! @string(maxLength: 1000)
  jwt: String! @string(maxLength: 100000)
}
## generalized EIP712 proof interface for VCs
interface VCEIP712Proof implements VerifiableClaim & VerifiableCredential 
  @createModel(description: "A verifiable credential interface of type EIP712")
{
  controller: DID! @documentAccount
  recipient: DID! @accountReference
  issuer: Issuer! 
  context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  credentialSchema: CredentialSchema!
  credentialStatus: CredentialStatus
  issuanceDate: DateTime!
  expirationDate: DateTime
  proof: ProofEIP712!
}
type Issuer {
  id: String! @string(maxLength: 1000)
  name: String @string(maxLength: 1000)
}
type CredentialStatus {
  id: String! @string(maxLength: 1000)
  type: String! @string(maxLength: 1000)
}
type CredentialSchema {
  id: String! @string(maxLength: 1000)
  type: String! @string(maxLength: 1000)
}
type ProofEIP712 {
  verificationMethod: String! @string(maxLength: 1000)
  created: DateTime! 
  proofPurpose: String! @string(maxLength: 1000)
  type: String! @string(maxLength: 1000)
  proofValue: String! @string(maxLength: 1000)
  eip712: EIP712!
}
type EIP712 {
    domain: Domain! 
    types: ProofTypes!
    primaryType: String! @string(maxLength: 1000)
}
type Types {
  name: String! @string(maxLength: 1000)
  type: String! @string(maxLength: 1000)
}
type ProofTypes {
    EIP712Domain: [Types!]! @list(maxLength: 100)
    CredentialSchema: [Types!]! @list(maxLength: 100)
    CredentialSubject: [Types!]! @list(maxLength: 100)
    Proof: [Types!]! @list(maxLength: 100)
    VerifiableCredential: [Types!]! @list(maxLength: 100)
}
type Domain {
  chainId: Int!
  name: String! @string(maxLength: 1000)
  version: String! @string(maxLength: 1000)
}

We can therefore start to query these fields:

query VerifiableClaims {
  verifiableClaimIndex(last: 10) {
    edges {
      node {
        recipient {
          id
        }
        controller {
          id
        }
        ... on VerifiableCredential {
          issuer {
            id
          }
          ... on VCEIP712Proof {
            context
            proof {
              type
              proofValue
              created
              verificationMethod
              eip712 {
                primaryType
                types {
                  EIP712Domain {
                    name
                    type
                  }
                  CredentialSchema {
                    name
                    type
                  }
                  CredentialSubject {
                    name
                    type
                  }
                  VerifiableCredential {
                    name
                    type
                  }
                }
                domain {
                  chainId
                  name
                  version
                }
              }
            }
          }
        }
        ... on Attestation {
          attester {
            id
          }
        }
      }
    }
  }
}

Layer 3

Our next layer can include the specific values relevant to the claim. We’ve reached the final layer we want for our attestations (so we’ll define this one as a type), while our Verifiable Credentials will keep the proof type detached from the claim:

interface AccountTrustCredential implements VerifiableClaim & VerifiableCredential  
  @createModel(description: "A verifiable credential interface for account trust credentials")
{
  controller: DID! @documentAccount
  recipient: DID! @accountReference
  issuer: Issuer! 
  context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  credentialSchema: CredentialSchema!
  credentialStatus: CredentialStatus
  issuanceDate: DateTime!
  expirationDate: DateTime
  credentialSubject: AccountTrustSubject! 
}
type AccountTrustSubject
{
  id: DID! @accountReference
  trusted: Boolean! 
}
type AccountAttestation implements VerifiableClaim & Attestation 
  @createModel(accountRelation: LIST, description: "An account attestation")
  @createIndex(fields: [{ path: ["time"] }])
{
  controller: DID! @documentAccount
  recipient: DID! @accountReference
  attester: DID! @accountReference
  uid: String! @string(minLength: 66, maxLength: 66)
  schema: String! @string(minLength: 66, maxLength: 66)
  verifyingContract: String! @string(minLength: 42, maxLength: 42)
  easVersion: String! @string(maxLength: 5)
  version: Int!
  chainId: Int! 
  r: String! @string(minLength: 66, maxLength: 66)
  s: String! @string(minLength: 66, maxLength: 66)
  v: Int! 
  types: [Types] @list(maxLength: 100)
  expirationTime: DateTime
  revocationTime: DateTime
  refUID: String @string(minLength: 66, maxLength: 66)
  time: Int! 
  data: String! @string(maxLength: 1000000)
  trusted: Boolean!
}

Incorporating into our querying:

query VerifiableClaims {
  verifiableClaimIndex(last: 10) {
    edges {
      node {
        recipient {
          id
        }
        controller {
          id
        }
        ... on VerifiableCredential {
          issuer {
            id
          }
          ... on AccountTrustCredential {
            credentialSubject {
              id {
                id
              }
              trusted
            }
          }
        }
        ... on Attestation {
          ... on AccountAttestation {
            attester {
              id
            }
          }
        }
      }
    }
  }
}

Layer 4 (Most Specific)

Finally, since we’ve defined our AccountTrustCredential interface as agnostic of our proof type, our final layer will define types that differentiate based on proof:

type AccountTrustCredential712 implements VerifiableClaim & VerifiableCredential & AccountTrustCredential & VCEIP712Proof 
  @createModel(accountRelation: LIST, description: "A verifiable credential of type EIP712 for account trust credentials")
  @createIndex(fields: [{ path: "issuanceDate" }])
  @createIndex(fields: [{ path: "trusted" }]) {
  controller: DID! @documentAccount
  recipient: DID! @accountReference
  issuer: Issuer! 
  context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  credentialSchema: CredentialSchema!
  credentialStatus: CredentialStatus
  issuanceDate: DateTime!
  expirationDate: DateTime
  credentialSubject: AccountTrustSubject! 
  trusted: Boolean!
  proof: ProofEIP712!
}
type AccountTrustCredentialJWT implements VerifiableClaim & VerifiableCredential & AccountTrustCredential & VCJWTProof 
  @createModel(accountRelation: LIST, description: "A verifiable credential of type JWT for account trust credentials")
  @createIndex(fields: [{ path: "issuanceDate" }])
  @createIndex(fields: [{ path: "trusted" }]) {
  controller: DID! @documentAccount
  recipient: DID! @accountReference
  issuer: Issuer! 
  context: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  type: [String!]! @string(maxLength: 1000) @list(maxLength: 100)
  credentialSchema: CredentialSchema!
  credentialStatus: CredentialStatus
  issuanceDate: DateTime!
  expirationDate: DateTime
  credentialSubject: AccountTrustSubject! 
  trusted: Boolean!
  proof: ProofJWT!
}

Putting it all together in a query example:

query VerifiableClaims {
  verifiableClaimIndex(last: 10) {
    edges {
      node {
        recipient {
          id
        }
        controller {
          id
        }
        ... on VerifiableCredential {
          issuer {
            id
          }
          ... on AccountTrustCredential712 {
            proof {
              type
              proofValue
              created
              verificationMethod
              eip712 {
                primaryType
                types {
                  EIP712Domain {
                    name
                    type
                  }
                  CredentialSchema {
                    name
                    type
                  }
                  CredentialSubject {
                    name
                    type
                  }
                  VerifiableCredential {
                    name
                    type
                  }
                }
                domain {
                  chainId
                  name
                  version
                }
              }
            }
          }
        }
        ... on Attestation {
          ... on AccountAttestation {
            r
            s
            v
            version
            verifyingContract
            easVersion
            trusted
            attester {
              id
            }
            recipient {
              id
            }
          }
        }
      }
    }
  }
}

You can begin to see how developers who want to account for multiple claim types, each with sub-options offering different proof types, can use interfaces to do interesting things with precision while still exposing everything as queryable under an overarching claim family.

Developers will still need to keep the reconstruction and validation mechanics in mind: the data is deconstructed and altered in order to save it to ComposeDB and make it interoperable under a VerifiableClaim interface, so client-side work is required to reconstruct each claim into a format that can be verified.
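As a sketch of what that reconstruction might look like for the attestation branch (heavily hedged: the mapping below is illustrative, assumes ethers, and glosses over fields such as revocable and the DateTime conversions that would need to be restored to their original wire format), we can reassemble the EIP-712 payload from an AccountAttestation node and recover the attester’s address:

import { verifyTypedData } from "ethers";

// illustrative reconstruction of the EAS-style typed data from ComposeDB fields;
// `node` is assumed to be an AccountAttestation query result from above
function recoverAttester(node: any): string {
  const domain = {
    name: "EAS Attestation",
    version: node.easVersion,
    chainId: node.chainId,
    verifyingContract: node.verifyingContract,
  };
  const types = {
    Attest: [
      { name: "version", type: "uint16" },
      { name: "schema", type: "bytes32" },
      { name: "recipient", type: "address" },
      { name: "time", type: "uint64" },
      { name: "expirationTime", type: "uint64" },
      { name: "revocable", type: "bool" },
      { name: "refUID", type: "bytes32" },
      { name: "data", type: "bytes" },
    ],
  };
  const message = {
    version: node.version,
    schema: node.schema,
    // did:pkh recipients must be converted back to a plain address
    recipient: node.recipient.id.split(":").pop(),
    time: node.time,
    expirationTime: 0, // assumes no expiration; otherwise convert the stored DateTime
    revocable: true,   // not stored in our model above; assumed here
    refUID: node.refUID,
    data: node.data,
  };
  // recovers the signing address from the stored r/s/v signature components
  return verifyTypedData(domain, types, message, { r: node.r, s: node.s, v: node.v });
}

If the recovered address matches the expected attester, the claim hasn’t been tampered with since it was signed.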

To view and experiment with the example code yourself, visit and clone this repository.

Data Provenance: ComposeDB as an Authenticated Database

https://blog.ceramic.network/data-provenance-composedb-as-an-authenticated-database/

Data provenance, typically used within the broader context of data lineage, refers to the source or first occurrence of a given piece of data. As a concept, data provenance (together with data lineage) is positioned to provide validity and encourage confidence related to the origin of data, whether or not data has mutated since its creation, and who the original publisher is, among other important details.

From tracking the origin of scientific studies to big banks complying with financial regulations, data provenance plays an integral role in supporting the authenticity and integrity of data.

Databases and Data Provenance

When it comes to databases, you can start to imagine how critical data provenance is when organizing and tracking files in a data warehouse or citing references from within a curated database. For consumer applications (take social media platforms such as Twitter, for example) that build entire advertising business models around the engagement derived from user-generated content, the claim of unaltered authorship (apart from account hacks) of a given Tweet is a guarantee made by the platform to its users and investors—trust cannot be built without it.

With the implications of data provenance in mind, organizations that rely on centrally controlled data stores within the context of consumer applications are constantly evolving security protocols and authentication measures to safeguard both their users and business from attacks that could result in data leaks, data alterations, data wipes, and more. However, so long as potential attack vectors and adequate user authentication are accounted for, these organizations benefit from inherent assurances related to the authenticity of incoming writes and mutations—after all, their servers are the agents performing these edit actions.

Data Provenance in Peer-to-Peer Protocols

But what about peer-to-peer data protocols and the applications built on them? How do topics such as cryptographic hashing, digital signatures, user authentication, and data origin verifiability in decentralized software coincide with data provenance and data lineage?

This article is meant to provide an initial exploration of how and where these topics converge and build a specific understanding of the overlap between these ideas and the technical architecture, challenges, qualities, and general functionality of ComposeDB on Ceramic. ComposeDB, built on Ceramic, is a decentralized graph database that uses GraphQL to offer developers a familiar interface for interacting with data stored on Ceramic.

The following article sections will set out to help accomplish the goals outlined above.

Smart Contract-Supported Blockchains

Blockchains that contain qualities analogous to a distributed state machine (such as those compatible with the Ethereum Virtual Machine) operate based on a specific set of rules that determine how the machine state changes from block to block. In viewing these systems as traversable open ledgers of data, accounts (both smart contract accounts and those that are externally owned) generate histories of transactions such as token transfers and smart contract interactions, all of which are publicly consumable without the need for permission.

What does this mean in the context of data provenance? Given the viability of public-key infrastructure, externally owned accounts prevent bad actors from broadcasting fake transactions because the sender’s identity is publicly available to verify. When it comes to the transactions themselves (both account-to-account and account-to-contract), the resulting data that’s publicly stored (once processed) includes information both about who acted and about their signature.

Transaction verifiability, in this context, relies on a block finalization process that requires validator nodes to consume multiple transactions, verify them, and include them in a block. Given the deterministic nature of transactions, participating nodes can correctly compute the state for themselves, therefore eventually reaching a consistent state about the transactions.

While there are plenty of nuances and levels of depth we could explore related to the architecture of these systems, the following are the most relevant features related to data provenance:

  • The verifiable origin of each transaction represents the data we care about related to provenance
  • Transactions are performed by externally owned accounts and contract accounts, both of which attach information about the transaction itself and who initiated it
  • Externally owned accounts rely on cryptographic key pairs

ComposeDB vs. Smart Contract-Supported Blockchains

There is plenty to talk about when comparing ComposeDB (and the Ceramic Network more broadly) to chains like Ethereum; however, for this post, we’ll focus on how these qualities relate to data provenance.

Controlling Accounts

Ceramic uses the Decentralized Identifier standard for user accounts (DID PKH and Key DID are supported in production). Similar to blockchains, they require no centralized party or registry. Additionally, both PKH DIDs and Key DIDs ultimately rely on public key infrastructure (PKH DIDs enable blockchain accounts to sign, authorize, and authenticate transactions, while Key DIDs expand cryptographic public keys into a DID document).

Sign in With Ethereum (SIWE)

Like chains such as Ethereum, Ceramic supports authenticated user sessions with SIWE. The user experience then diverges slightly when it comes to signing transactions (outlined below).

Signing Transactions

While externally owned accounts must manually sign individual transactions on chains like Ethereum (whether interacting with a smart contract or sending a direct transfer), data in Ceramic (streams) is written by authenticated accounts during a timebound session, offering a familiar, Web2-like experience. The root account (your blockchain wallet if using Ceramic’s SIWE capability, for example) generates a temporary child account for each application environment with tightly-scoped permissions, which then persists for a short period in the user’s browser. For developers familiar with using JWTs in Node.js to authenticate users, this flow should sound familiar.

This capability is ideal for a protocol meant to support mutable data with verifiable origin, thus allowing for multiple writes to happen over a cryptographically authorized period (and with a signature attached to each event that can be validated) without impeding the user’s experience by requiring manual signs for each write.

Consensus

Ceramic relies on event streams with a limited consensus model: a given stream can allow multiple parallel histories while ensuring that any two parties consuming the same events for a stream will arrive at the same state. This means that not all streams and their corresponding tips (the latest events within their event logs) are known by all participants at any given point in time.

However, a mechanism known as the Ceramic Anchor Service (CAS) is responsible for batching transactions across the network into a Merkle tree and regularly publishing its root in a single transaction to Ethereum. Therefore, Ceramic does offer a consensus on the global ordering of Ceramic transactions.

Immutability

Just as smart contracts provide a deterministic structure that dictates how users can interact with them (while guaranteeing they will not change once deployed), ComposeDB schemas are also immutable, offering guarantees around the types of data a given model can store. When users write data using these definitions, each resulting model instance document can forever only be altered by the account that created it (or accounts it grants limited permission to), and can only change in ways that conform to the schema’s definition.

Finally, every stream is composed of an event log of one or more commits, making it easy for developers to extract not only the provenance of the stream’s data, based on the cryptographic signature of the account that created it, but also the stream’s data lineage, by traversing the commit history to observe how the data mutated over time.
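A minimal sketch (assuming a locally running Ceramic node and a known StreamID) of walking that commit history with the HTTP client might look like this:

import { CeramicClient } from "@ceramicnetwork/http-client";

// walk a stream's event log to inspect provenance (genesis commit) and
// lineage (each subsequent commit); assumes a local Ceramic node
const ceramic = new CeramicClient("http://localhost:7007");

async function inspectLineage(streamId: string): Promise<void> {
  const commits = await ceramic.loadStreamCommits(streamId);
  commits.forEach((commit: any, index: number) => {
    // index 0 is the signed genesis commit that establishes the controller;
    // later entries record how the document mutated over time
    console.log(`commit ${index}:`, commit.cid);
  });
}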

Publicly Verifiable

Similar to networks like Ethereum, the Ceramic Network is public by default, allowing any participating nodes to read any data on any stream. While the values of the data may be plaintext or encrypted, contingent on the objectives of the applications using them, anyone can verify the cryptographic signatures that accompany the individual event logs (explained above).

Centralized Databases

The broad assumption I’ll line up for this comparison is that a traditional “Web2” platform uses a sandboxed database to store, retrieve, and write data on behalf of its users. Apart from the intricate architecture strategies used to accomplish this at scale with high performance, most of these systems rely on the assurances that their servers alone have sole authority to perform writes. Individual user accounts can be hacked into via brute force or socially engineered attacks, but as long as the application’s servers are not compromised, the data integrity remains intact (though requiring participants to trust a single point of failure).

ComposeDB vs. Centralized Databases

If this article set out to compare ComposeDB to traditional databases in terms of functionality and performance, we’d likely find more similarities than differences. When comparing ComposeDB to the paradigm of a “traditional” database setup in the context of data provenance, however, the inverse holds true, based on much of what was discussed in the previous section.

Embedded Cryptographic Proof

As previously discussed, all valid events in Ceramic include a required DAGJWS signature derived from the stream’s controlling account. While it’s possible (though logically unwise) that an application using a centralized database could fabricate data related to the accounts of its users, event streams in Ceramic are at all times controlled by the account that created the stream. Even if a Ceramic account accidentally delegates temporary write access to a malicious application that then authors inaccurate data on the controller’s behalf, the controlling account never loses admin access and can revert or overwrite those changes.
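As a hedged sketch of that last point (reusing the simple Trust model from the tutorial earlier in this document purely for illustration, and relying on ComposeDB’s generated update mutations), the controlling account could always issue an update through its ComposeDB client to overwrite a value it disagrees with:

import type { ComposeClient } from "@composedb/client";

// illustrative only: `compose` is an authenticated ComposeClient, and
// `documentId` is the StreamID of a Trust document this account controls
async function revertTrust(compose: ComposeClient, documentId: string) {
  return compose.executeQuery(`
    mutation {
      updateTrust(input: {
        id: "${documentId}"
        content: { trusted: false }
      }) {
        document {
          id
          trusted
        }
      }
    }
  `);
}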

Public Verifiability

Unlike with Ceramic, the origin of data (along with most accompanying information) is not accessible by design when using a centralized database, at least not in a permissionless way. The integrity of the data within a “traditional” database must therefore be assumed based on other factors requiring trust between the application’s users and the business itself. This architecture enables many of the business models these applications rely on, giving them free rein over how they leverage or sell user data.

Conversely, business models like advertising can be (and are currently being) built on Ceramic data, which flips this paradigm on its head. Individual users have the option to encrypt data they write to the network and have an array of tools at their disposal to enable programmatic or selective read access based on conditions they define. Businesses that want to access this data can therefore work directly with the users themselves to define the conditions under which their data can be accessed, putting the sovereignty of that data into individual users’ hands.

Timestamping and Anchoring

In a private, sandboxed database, development teams can implement a variety of methods to timestamp entries, and they don’t have to worry about trusting other data providers in a public network to be competent and non-malicious. Conversely, data in Ceramic leverages the IPLD Timestamp Proof specification, which involves frequently publishing the root of a Merkle tree to the blockchain, with sets of IPLD content identifiers (representing Ceramic data) as the tree’s leaves. While the underlying data structure (event log) of each stream preserves the ordering of its events, with each event pointing to the prior one in the stream, the anchoring process allows developers to timestamp events in a decentralized, trustless way.

Verifiable Credentials

Verifiable credentials under the W3C definition unlock the ability for verifiable claims to be issued across a virtually limitless set of contexts, with the guarantee that they can later be universally verified in a cryptographically secure way. This standard relies on several key features (below are only a few of them):

  • Verifiable Data Registry: A publicly available repository of the verifiable credential schemas one might choose to create instances of
  • Decentralized Identifiers: Verifiable credentials rely on DIDs to both identify the subject of a claim, as well as the cryptographic proof created by the issuer
  • Core Data Model: These credentials follow a standard data model that ensures that the credential’s body (made up of one or more claims about a given entity) is inherently tamper-evident, given the fact that the issuer generates a cryptographic proof that guarantees both the values of the claims themselves and the issuer’s identity

For example, an online education platform may choose to make multiple claims about a student’s performance and degree of completion related to a specific student and a specific course they are taking, all of which could be wrapped up into one verifiable credential. While multiple proof formats could be derived (EIP712 Signature vs. JWTs), the provenance of the credential is explicit.

However, unlike blockchains and databases, verifiable credentials are not storage networks themselves and therefore can be saved and later retrieved for verification purposes in a wide variety of ways.

ComposeDB vs. Verifiable Credentials (and other claim formats)

I mentioned earlier that schema definitions (once deployed to the Ceramic network) offer immutable and publicly available data formats that enforce constraints on all subsequent instances. For example, anyone using ComposeDB can deploy a model definition to assert an individual’s course completion and progress, and similarly, any participant can create document instances within that model’s family. Given the cryptographic signatures and immutable model instance controller identity (automatically attached to each Ceramic stream commit, as discussed above), you can start to see how the qualities verifiable credentials set out to provide, like tamper-evident claims and credential provenance, are inherent to ComposeDB.

Tamper-Proof

Like a verifiable credential, each commit within a given Ceramic stream is immutable once broadcasted to the network. Within the context of a model instance document within ComposeDB, while the values within the document are designed to be mutated over time, each commit is publicly readable, tamper-evident, and cryptographically signed.

Inherent Origin

We’ve discussed this extensively above—each event provides publicly-verifiable guarantees about the identity of the controlling account.

Publicly Available

Unlike verifiable credentials, which offer just a standard, ComposeDB allows developers both to define claim standards (using schema definitions) and to make those instances publicly available to be read and confirmed by other network participants. ComposeDB is therefore also a public schema registry in itself.

Trustworthiness

In addition to the specific comparisons to other data storage options and verifiable claim standards, what qualities does ComposeDB offer that enable anyone to audit, verify, and prove the origin of data it contains? While parts of this section may be slightly redundant with the first half of this article, we’ll take this opportunity to tie these concepts together in a more general sense.

Auditable, Verifiable, and Provable

For trust to be equitably built in a peer-to-peer network, the barrier to entry to be able to run audits must be sufficiently low, concerning both cost and complexity. This holds especially true when auditing and validating the origin of data within the network. Here are a few considerations and trade-offs related to ComposeDB’s auditability.

No Cost Barrier With Open Access to Audit

Developers building applications on ComposeDB do not need to worry about cost-per-transaction fees related to the read/write activity their users perform. They will, however, need to architect an adequate production node configuration (that should be built around the volume a given application currently has and how it expects to grow over time), which will have separate network-agnostic costs.

This also holds for auditors (or new applications that want to audit data on Ceramic before building applications on that data). Any actor can spin up a node without express network permissions, discover streams representing data relevant to their business goals, and begin to index and read them. Whether an organization chooses to build on ComposeDB or directly on its underlying network (Ceramic), as long as developers understand the architecture of event logs (and specifically how to extract information like cryptographic signatures and controlling accounts), they will have fully transparent insight into the provenance of a given Ceramic dataset.

Trade-Off: Stream Discoverability

While fantastic interfaces, such as s3.xyz, have been built to improve data and model discoverability within the Ceramic Network, one challenge Ceramic faces as it continues to grow is how to further enable developers to discover (and build on) existing data. More specifically, while it’s easy to explain to developers the hypothetical benefits of data composability and user ownership in the context of an open data network (such as the data provenance-related qualities we’ve discussed in this post), showing it in action is a more difficult feat.

Structured

The Ceramic Network also occupies territory that does not fit neatly into the on-chain or off-chain realm. Just as the Ethereum Attestation Service (EAS) mentions on its Onchain vs. Offchain page, a “verifiable data ledger” category of decentralized storage infrastructure is becoming increasingly appealing to development teams who want to gain the benefits of both credible decentralization and maximum performance, especially when dealing with data that’s meant to mutate over time.

As we discussed above, here’s a refresher on key insights into ComposeDB’s structure, and how these impact the provenance of its data.

Ceramic Event Logs

Ceramic relies on a core data structure called an event log, which combines cryptographic proofs (to ensure immutability and enable authentication via DID methods) and IPLD for hash-linked data. All events on the network rely on this underlying data structure, so whether developers are building directly on Ceramic or using ComposeDB, teams always have access to the self-certifying log that they can verify, audit, and use to validate provenance.

ComposeDB Schema Immutability

Developers building on ComposeDB also benefit from the assurances that schema definitions provide, based on the fact that they cannot be altered once deployed. While this may be an issue for some teams who might need regular schema evolution, other teams leverage this quality as a means to ensure constant structure around the data they build on. This feature therefore provides a benefit to teams who care strongly about both data provenance and lineage – more specifically, the origin (provenance) can be derived from the underlying data structure, while the history of changes (lineage) must conform to the immutable schema definition, and is always available when accessing the commit history.

A Decentralized Data Ledger

Finally, Ceramic nodes host the network’s data and run the protocol, providing applications access to the network. For ComposeDB nodes, this configuration includes an IPFS service to enable access to the underlying IPLD blocks for event streams, a Ceramic component to enable HTTP API access and networking (among other purposes), and PostgreSQL (for indexing model instances in SQL and providing a read engine). All Ceramic events are regularly rolled into a Merkle tree and the root is published to the Ethereum blockchain.

Within the context of data provenance, teams who wish to traverse these data artifacts back to their sources can use various tools to publicly observe these components in action (for example, the Ceramic Anchor Service on Etherscan), but must be familiar with Ceramic’s distributed architecture to understand what to look for and how these reveal the origins of data.

Trade-Off: Complexity

There’s no question that the distributed nature of the Ceramic Network can be complex to comprehend, at least at first. This is a common problem within P2P solutions that uphold user-data sovereignty and rely on consensus mechanisms, especially when optimizing for performance.

Trade-Off: Late Publishing Risks

As described on the Consensus page in the Ceramic docs, all streams and their potential tips are not universally knowable in the form of a global state that’s available to all participants at any point in time. This setup allows individual participants to intentionally (or accidentally) withhold some events while publishing others, otherwise known as ‘selective publishing’. If you read the specifics and the hypothetical scenario outlined in the docs, you’ll quickly learn that this type of late publishing attack makes little sense in practice: a stream has only one controlling account, so that account would somehow need to be incentivized to attack its own data.

What does this have to do with data provenance? While the origin of Ceramic streams (even in the hypothetical situation of a stream with two divergent and conflicting updates) is at all times publicly verifiable, the potential for this type of attack has more to do with the validity of that stream’s data lineage (which is more concerned with tracking the history of data over time).

Portable

Finally, another important notion to consider in the context of data provenance and P2P software is replication and sharing. Developers looking to build on this class of data network should not only be concerned with how to verify and extract the origin of data from the protocol but also need assurances that the data they care about will be available in the first place.

ComposeDB presumes that developers will want options around the replication and composability of the data streams they will build on.

Node Sync

You’ll see on the Server Configurations page that there’s an option to deploy a ComposeDB node with historical sync turned on or off. When it’s turned off, a given node can still write data to a model definition that already exists in the network, but it will only index model instance documents written by that node. Conversely, when it’s turned on, the node will also sync and index data that other nodes have written to the same model definition (or many). The latter enables the ‘composability’ factor that development teams can benefit from: this is the mechanism that allows teams to build applications on shared, user-controlled data.
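
As a rough sketch of what this looks like in practice (field names are paraphrased from the Server Configurations page and should be verified against the docs), the indexing section of a node’s daemon.config.json is where historical sync is toggled alongside the Postgres connection:

{
  "network": { "name": "mainnet" },
  "ipfs": { "mode": "bundled" },
  "indexing": {
    "db": "postgres://ceramic:password@localhost:5432/ceramic",
    "enable-historical-sync": true
  }
}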

Recon (Ceramic Improvement Proposal)

There is an active improvement proposal underway, called Recon, to improve the efficiency of the network. In short, development related to this proposal aims to streamline the underlying process by which nodes sync data, offering benefits such as significantly lifting the load off of nodes that are uninterested in a given stream set.

Trade-Off: Data Availability Considerations

Of course, the question of data portability and replication necessitates conversation around the persistence and availability of information developers care about. In Ceramic terms, developers can provide instructions to their node to explicitly host commits for a specific stream (called pinning), improving resiliency against data loss. However, developers should know that if only one IPFS node is pinning a given stream and it disappears or gets corrupted, the data within that stream will be lost. Additionally, if only one node is responsible for pinning a stream and it goes offline, that stream won’t be available for other nodes to consume (which is why it’s best practice to have multiple IPFS nodes running in different environments pinning the same streams).
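
For example, a minimal sketch of pinning via the js-ceramic HTTP client might look like the following (the stream ID is a placeholder, and depending on your js-ceramic version the pin API may instead be exposed through the admin API):

import { CeramicClient } from '@ceramicnetwork/http-client'
import { StreamID } from '@ceramicnetwork/streamid'

const ceramic = new CeramicClient('http://localhost:7007')

// hypothetical stream the application depends on
const streamId = StreamID.fromString('kjzl6kcym7w8y...')

// ask this node to persist the stream's commits locally
await ceramic.pin.add(streamId)

// repeat on additional nodes in other environments for redundancy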

Tutorial: Encrypted Data on ComposeDB

https://blog.ceramic.network/tutorial-encrypted-data-on-composedb/

Storing encrypted data that only certain users can access is an important and sometimes necessary feature for many Web3 applications. When it comes to ComposeDB on Ceramic, because the underlying protocol is an open, public network, any data stream can be accessed and read by any participating node.

In this tutorial, we will walk through one methodology you can use to encrypt and decrypt data on ComposeDB. Before we dive in, let’s first cover some key concepts that will come into play.

What are DIDs?

DIDs are the W3C standard for Decentralized Identifiers. It specifies a general way of going from a string identifier, e.g. did:key:z6Mki..., to a DID document that contains public keys for signature verification and key exchange.

What is JOSE?

JOSE is a standard from IETF that stands for JSON Object Signing and Encryption, and that pretty much explains what it is. There are two main primitives in this standard: JWS (JSON Web Signatures) and JWE (JSON Web Encryption). Both of these formats allow for multiple participants: in JWS there can be one or multiple signatures over the payload, and in JWE there might be one or multiple recipients for the encrypted cleartext.
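
To make these two primitives concrete, here is a minimal sketch (not part of the original tutorial) that uses js-did with a throwaway did:key to produce a JWS and a JWE; the payload and random seed are illustrative only:

import { DID } from 'dids'
import { Ed25519Provider } from 'key-did-provider-ed25519'
import KeyResolver from 'key-did-resolver'
import { randomBytes } from '@stablelib/random'

// create and authenticate a throwaway did:key for illustration
const did = new DID({
  resolver: KeyResolver.getResolver(),
  provider: new Ed25519Provider(randomBytes(32)),
})
await did.authenticate()

// JWS (signing): anyone can verify this payload was produced by `did`
const jws = await did.createJWS({ hello: 'world' })

// JWE (encryption): only the DIDs listed as recipients can decrypt the payload
const jwe = await did.createDagJWE({ hello: 'world' }, [did.id])
const decrypted = await did.decryptDagJWE(jwe) // -> { hello: 'world' }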

Building With Ceramic Libraries

Whether you need to authenticate users and create DID-based sessions, or you need to encrypt data to live on ComposeDB, we’ve created several libraries that can easily be accessed as node package dependencies. This tutorial will show you how to use these powerful tools directly.

key-did-provider-ed25519 offers a simple DID Provider that supports both signing and data encryption.

js-did is a library that allows developers to represent a user in the form of a DID. This is the main interface we’re going to be looking at in this tutorial. It allows us to sign data with the currently authenticated user, encrypt data to any user (DID), and decrypt data with the currently authenticated user.

@stablelib/sha256 is a library that provides the SHA-256 hashing algorithm (used here to derive a deterministic seed from the user’s signature) and is part of the larger @stablelib collection of cryptographic primitives.

Setup your environment

This tutorial will walk you through how to set up encryption capabilities within an existing repository. However, if you’re looking to start from scratch, we recommend exploring our Use Ceramic App starter.

Dependencies

  • MetaMask or another Ethereum wallet
  • Ability to successfully run local Ceramic and ComposeDB Clients
  • Scaffolding to allow for local runtime composite deployments when starting your ComposeDB node

Generate Encryption Key

In your application, prompt your users to generate a special DID to be used when encrypting messages or accessing encrypted data on ComposeDB. To do so, first define a userPrompt variable containing the message users will see in their wallet when signing the request; because the same message is always signed, the signature provides consistent entropy for the hashing algorithm (resulting in the seed defined below).

The seed is a secret that will then be used to define a new DID class instance (using the key-did-provider-ed25519 library). Finally, execute the authenticate() method on your new DID instance, returning the new DID to be saved as a session.

import * as u8a from 'uint8arrays'
import { hash } from '@stablelib/sha256'
import { Ed25519Provider } from 'key-did-provider-ed25519'
import KeyResolver from 'key-did-resolver'
import { DID } from 'dids'
/*
this prompt can be customized to whatever message you want to display for the
user. As long as the same message is signed, it should generate the same entropy
*/
const userPrompt = "Give this app permission to read or write your private data";
const accounts = await window.ethereum.request({ method: "eth_requestAccounts" });
const entropy = await window.ethereum.request({
  method: 'personal_sign',
  params: [u8a.toString(u8a.fromString(userPrompt), 'base16'), accounts[0]],
});
const seed = hash(u8a.fromString(entropy.slice(2), "base16"));
const encryptionDid = new DID({
  resolver: KeyResolver.getResolver(),
  provider: new Ed25519Provider(seed),
});
await encryptionDid.authenticate();
console.log('encryptionDid', encryptionDid.id);

Encrypting and Decrypting Data

For the purpose of this tutorial, we’ll be using a simple direct message data composite. Our top-level model will only make use of two fields—a “recipient” (DID value type), and a “directMessage” (a stringified JWE).

Our simple message model reads as follows:

type EncryptedMessage @createModel(accountRelation: LIST, description: "A direct message encrypted data model") {
    recipient: DID!
    directMessage: String! @string(maxLength: 100000)
}

Make sure to compile and deploy your new composite on your Ceramic client, allowing you to mutate and query the models.
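
If you prefer to do this step programmatically rather than with the ComposeDB CLI, a minimal sketch using @composedb/devtools and @composedb/client might look like the following (the encryptedMessageSchema variable, local node URL, and admin DID setup are assumptions):

import { CeramicClient } from '@ceramicnetwork/http-client'
import { ComposeClient } from '@composedb/client'
import { Composite } from '@composedb/devtools'

// assumes ceramic.did has already been set to an authenticated DID with admin access to the node
const ceramic = new CeramicClient('http://localhost:7007')

// encryptedMessageSchema holds the EncryptedMessage GraphQL definition shown above
const composite = await Composite.create({ ceramic, schema: encryptedMessageSchema })

// instruct the node to index the new model(s)
await composite.startIndexingOn(ceramic)

// the runtime definition is what ComposeClient uses to execute queries and mutations
const composeClient = new ComposeClient({ ceramic, definition: composite.toRuntime() })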

Encrypt & Write

If we want to encrypt data that only the currently authenticated user can decrypt, we can simply encrypt it directly with the encryptionDid. To avoid escaping the JSON after stringifying the resulting JWE (the embedded double quotes would otherwise prevent the data from conforming to the String constraint), we can temporarily replace double quotes with backticks.

Use your composeClient instance to perform a mutation using the values generated from the encrypted message:


const cleartext = 'this is a secret message';
const jwe = await encryptionDid.createDagJWE(cleartext, [encryptionDid.id]);
//stringify your JWE object and replace escape characters
const stringified = JSON.stringify(jwe).replace(/"/g,"`");
const message = await composeClient.executeQuery(`
  mutation {
    createEncryptedMessage(
      input: {
        content: {
          recipient: "${encryptionDid.id}"
          directMessage: "${stringified}"
        }
      }
    ) {
      document {
        recipient {
          id
        }
        directMessage
      }
    }
  }
`)

Read & Decrypt

Similarly, decryption is only available to a user whose encryptionDid was included as a recipient of the JWE. The following example query obtains the first message and uses the authenticated user’s encryptionDid to decrypt it:

const query = await composeClient.executeQuery(`
      query{
        encryptedMessageIndex(first:1){
          edges{
            node{
              recipient{
                id
              }
              directMessage
            }
          }
        }
      }
      `);
const arr = query.data?.encryptedMessageIndex?.edges;
//Reverse-replacement of backticks for double-quotes prior to parsing
const string = arr[0].node.directMessage.replace(/`/g,'"');
const plaintext = await encryptionDid.decryptDagJWE(JSON.parse(string));

Multiple User Encryption

In certain situations, you may choose to grant multiple users access to encrypting and decrypting data on ComposeDB. In this situation, your data composite might comprise two models: one for storing each user’s public encryption DID, and one for storing the encrypted messages themselves.

Store Encryption Keys

Let’s say your data model for storing encryption keys reads as follows:

type PublicEncryptionDID @createModel(accountRelation: LIST, description: "A data model to store encryption DIDs for an application") {
    author: DID! @documentAccount
    publicEncryptionDID: DID!
}

And your simplified encrypted message model reads as follows:

type EncryptedMessage @createModel(accountRelation: LIST, description: "An encrypted message data model") {
    message: String! @string(maxLength: 100000)
    recipients: [DID] @list(maxLength: 200)
}

Hopping back to our application, as we authenticate users, we can store their corresponding encryption keys:

// The method below creates a session for your user and returns their DID id
const auth = await encryptionDid.authenticate();
const storeEncryptionKey = await composeClient.executeQuery(`
  mutation {
    createPublicEncryptionDID(
      input: {
        content: {
          publicEncryptionDID: "${auth}"
        }
      }
    ) {
      document {
        author {
          id
        }
        publicEncryptionDID {
          id
        }
      }
    }
  }
`);

Writing Group-Encrypted Data

Let’s say you’ve gated a segment of your audience based on their PKH DID CeramicAccount (explained below). This section shows how to encrypt data that multiple users can decrypt.

Since anyone on the Ceramic network can create instances of any model, you can filter based on which CeramicAccount authored each model instance. In the example below, we filter by the did:pkh. This type of DID account allows interoperability between blockchain accounts and DIDs: each time you prompt your users to author mutations using an Ethereum wallet, the corresponding native Ceramic account is a PKH DID.

For the full specification, please reference the PKH DID Method specification.

The example below shows how to create an encrypted message only two separate users can decrypt:

//Query first user based on PKH DID
const query1 = await composeClient.executeQuery(`
    query {
      node(id: "did:pkh...") {
        ... on CeramicAccount {
          id
          publicEncryptionDidList(first: 1) {
            edges {
              node {
                publicEncryptionDID{
                  id
                }
              }
            }
          }
        } 
      }
    }
  `);
//Query second user based on PKH DID
const query2 = await composeClient.executeQuery(`
    query {
      node(id: "did:pkh...") {
        ... on CeramicAccount {
          id
          publicEncryptionDidList(first: 1) {
            edges {
              node {
                publicEncryptionDID{
                  id
                }
              }
            }
          }
        } 
      }
    }
  `);
const results = [
  ...query1.data?.node?.publicEncryptionDidList?.edges,
  ...query2.data?.node?.publicEncryptionDidList?.edges,
];
// collect each recipient's public encryption DID
const users = results.map((el) => el.node.publicEncryptionDID.id);
const cleartext = 'this is a shared secret message';
const jwe = await encryptionDid.createDagJWE(cleartext, users);
//stringify your JWE object and replace escape characters
const stringified = JSON.stringify(jwe).replace(/"/g,"`");
const encryptedGroupMessage = await composeClient.executeQuery(`
  mutation {
    createEncryptedMessage(
      input: {
        content: {
          message: "${stringified}"
          recipients: 
            ["${users[0]}", "${users[1]}"]
        }
      }
    ) {
      document {
        message
      }
    }
  }
`);

Decoding Group-Encrypted Data

Regardless of how many encryption keys representing individual users are involved in generating a JWE, a single holder of just one of those encryption keys will still be able to decrypt the corresponding message in a similar fashion:

const query = await composeClient.executeQuery(`
    query{
      encryptedMessageIndex(first:1){
        edges{
          node{
            message
          }
        }
      }
    }
  `);
const arr = query.data?.encryptedMessageIndex?.edges;
//Reverse-replacement of backticks for double-quotes prior to parsing
const string = arr[0].node.message.replace(/`/g,'"');
const plaintext = await encryptionDid.decryptDagJWE(JSON.parse(string));

That’s it for this tutorial! We hope you enjoyed exploring one encryption methodology you can use, both when encrypting data on behalf of your users and when segmenting your application’s user base so that only those partitions can decrypt the relevant data.

If you want to get started with ComposeDB on Ceramic and don’t know where to start, dive into the ComposeDB Developer Docs, or walk through a detailed tutorial that features end-to-end steps using an article publishing platform as the example use case.