Enhancing Developer Experience on Corda 5: Empowering Efficient & Productive CorDapp Development


By: Ido Katz, Product Manager – Developer Experience at R3

The next generation of Corda represents a new era for R3 and its flagship distributed application platform. R3’s Developer Experience (DevEx) team understands the importance of providing the best journey for CorDapp developers: a productive and efficient process spanning designing, coding, and testing. In this blog post, we will share what the DevEx team learned from Corda 4 user and customer feedback and give insights into the team’s objectives and key deliverables.

People working on user journey map
Photo by UX Indonesia on Unsplash

Developing on Corda 4—Lessons Learnt

Product feedback is essential for product development that meets users’ and customers’ needs. At R3, we have built the Corda 5 developer journey based on lessons learnt from our developer community, clients, and internal application development teams. Here are the main insights from Corda 4 developers:

  • Faster unit-test modification and better test automation capabilities
  • An easier and faster build process for bootstrapping a local network on every code change
  • Lower compute-resource requirements for flow testing on the local network
  • Better support for testing mid-flows (currently, tests must start by issuing a new state)

Corda 5 is significantly different from (and better than) Corda 4. Therefore, based on clients’ feedback and experience developing on Corda 4, R3’s DevEx team is building a new set of tools and features to improve the CorDapp development process and environment.

R3’s DevEx—North Star

R3’s DevEx team is building a Corda 5 Developer Journey with a focus on three main pillars: Productivity, Ease-of-use, and Quality.

  • Productivity – Simplified Development Process: Corda simplifies the development process with its Java-based API, Java and Kotlin smart contract language support, and CorDapp development training materials. This reduces the learning curve and enables developers to write secure and efficient code. Corda’s comprehensive documentation and tutorials facilitate quick onboarding, ensuring developers can start building DLT solutions from day one.
  • User Experience – Extensive Tooling Support: Corda provides various development tools, including Corda Standard Development Environment (CSDE), Gradle plugins, command-line interfaces, and debugging utilities. These tools streamline the development process and offer developers a seamless experience while working on Corda-based projects.
  • Quality – Local CorDapp Testing: Corda provides a robust testing framework that enables developers to write comprehensive unit, integration, and end-to-end (E2E) tests. This ensures the reliability and resilience of developed CorDapps, reducing potential bugs and enhancing the overall quality of the codebase.
  • Productivity – Faster Release Cycles: Corda’s developer tooling and capabilities are designed on faster timeframes than the Corda core platform/APIs. Therefore, the DevEx team publishes its tools and features in multiple release iterations (beta/GA), with some content decoupled from the main releases (e.g., v5.0, v5.1, v5.2).

Corda 5—what’s ready and what’s still in the oven?

  • CSDE provides a modular local development and deployment environment to easily build and test your CorDapp. R3’s CSDE comprises two supported templates (Java and Kotlin), Gradle plugins for local deployment automation, and a Combined Worker to emulate/mock Corda 5 microservices management.
    • To be improved: The Combined Worker is slower than the real operational network. The DevEx team is redesigning the Combined Worker for improved throughput and a closer architectural resemblance to the operational network. The new CSDE will be released with v5.1.
  • Contracts Testing: A comprehensive suite of test APIs that implement the UTXO ledger, providing the ability to construct a transaction and assert that it verifies, or fails for a specific reason.
    • Release date: Contracts Testing will be available in beta on GitHub by September 2023
  • Flows Testing: The ‘Driver’ is a local, in-memory version of Corda that can be launched using JUnit 5. The Flow Testing Tool allows developers to upload their CPBs (Corda Package Bundles) and debug their flows within Corda’s flow framework. The driver contains Corda functionality from the main codebase without needing either Kafka or REST.
    • Release date: The Driver has passed a PoC phase and is currently under development. A beta will be available in November 2023
  • CorDapp E2E Testing: Using the CSDE, Gradle plugins, and Rest API Swagger, a CorDapp developer can write E2E tests for their CorDapps.


Corda’s developer-friendly features, such as a simplified development process, a robust testing framework, modular architecture, seamless integration capabilities, a collaborative ecosystem, and extensive tooling support, make it an ideal platform for building private distributed ledger applications. By prioritizing developer experience, R3 empowers developers to focus on creating innovative solutions without unnecessary complexities, ultimately accelerating the adoption of blockchain technology across industries.

We are very excited about the general availability of Next-Gen Corda. It remains a work in progress, and the DevEx team will continue to build tools and features to support our developer community.

To make Corda even better, we need your feedback to help us identify and resolve issues. Corda is freely available. So why not give it a try?

The post Enhancing Developer Experience on Corda 5: Empowering Efficient & Productive CorDapp Development appeared first on Corda.

Next-Gen Corda is Here!


By: Dries Samyn, Principal Software Engineer at R3


Next-Gen Corda in Numbers

  • 3 years in the making
  • 2 open-source repositories
  • Nearly 100 contributors
  • At least 3 R3 engineering babies conceived and successfully delivered (we take nurturing in-house talent very seriously!)
  • Over 1,500,000 lines of code added, deleted, or changed
  • Over 4000 pull requests
  • Over 5000 merged commits
  • Over 70,000 Jenkins builds
  • Over 500,000,000 test executions
  • 2 alpha releases
  • 4 beta releases
  • Exactly 3.56 liters of blood, sweat, and tears

These are just some of the statistics to give an indication of the scale of the work that has gone into building the next generation of the best Distributed Ledger Technology (DLT) platform in the world. All while continuing the development of Corda 4, with four releases during this period, to maintain its status as “the go-to DLT platform”.
In this blog, we will reflect on the journey from the perspective of an engineer.

Why Did We Do This?

I would like to say that it was for the love of solving really complex engineering problems, but it wasn’t. Of course we loved solving these complex problems, often getting things wrong a couple of times before we got it right, but there were more solid reasons that we undertook what has been the largest technical transformation of the Corda platform to date.

R3 has been working with customers using Corda from the early days of DLT adoption. We have seen innovative use cases and organizations solving real-world problems, sometimes using Corda in ways we hadn’t originally envisaged. Some of those use cases hit limitations that could not be addressed by modifying the original architecture of Corda 4.

Along came “Project Starfish” and with it, the product team presented a vision for what the next generation of Corda needed to be. This vision would ensure that Corda was fit for the next generation of financial market infrastructure, digital assets, and digital currency applications, as well as the many other innovative use-cases of private, permissioned blockchain technology.

What Does Next-Gen Corda Look Like?

It’s still the Corda we know and love

Car getting engine replaced

Before describing what is new and how this has changed Corda, we want to establish that Next-Gen doesn’t mean it’s a different thing. We made some improvements to the CorDapp API that will hopefully help developers to write cleaner, more testable code, but this should not feel like a big change to existing CorDapp developers. We’re keeping the Corda Flow programming model that we know and love, the UTXO ledger is what we’re familiar with, and smart contracts still look pretty much like they did before.

The big technical transformation that has happened has been all about the internals. We’ve taken Corda and given it a big engine transplant.

Highly available (HA) and scalable

“Highly Available” was the leading requirement for Project Starfish. The types of applications that we now know are being built on the Corda platform have challenging requirements around uptime and scalability. This was probably the biggest driver behind the need to re-imagine some of Corda’s foundations.

Diagram showing comparison of Corda 4 and Next-Gen Corda

Previously, a Corda node was supported by a monolithic JVM process. This meant it was quite easy to manage and it performed well. However, there were limits to how much it could scale. As one node was one process, scaling a Corda node meant finding a bigger JVM/host.
This also meant that achieving a highly available configuration was difficult, as only one process could support a node at any time. There was limited support for a hot or warm topology where a second instance sits on standby to take over in case of failure.

Next-Gen Corda has taken on a distributed architecture designed around stateless worker processes that can scale up or down and provide redundancy.


Diagram showing multiple Corda clusters

Of course, Corda is still a distributed system; however, what we see is that users often operate multiple Corda nodes: either because they have multiple segregated identities on a network, or because they are part of a progressive decentralization model where multiple nodes are operated on behalf of others until they are ready to manage their own.

This is the reason that we separated the concept of the Corda node from the compute resource. In Next-Gen Corda, we call them Virtual Nodes and they are simply execution contexts in which worker processes within a Corda cluster can operate.

To see this in action, you can easily set up a Corda network with dozens or even hundreds of nodes on your laptop, or local development environment.

Network of networks

We believe in the value of private, permissioned, networks, but that doesn’t mean that we don’t believe in interoperability between these networks. Being able to complete an atomic Delivery vs Payment (DvP) transaction across Corda networks, or even between Corda and other blockchains such as Ethereum, is something that our customers and partners in the ecosystem want to do. This is why Interoperability was such an important requirement for Next-Gen Corda. We have already announced and launched Hyperledger Lab Harmonia – watch this space for more announcements.


You interact with Next-Gen Corda using a standard REST interface over HTTPS. This makes it straightforward to integrate into any kind of application or workflow. Corda no longer holds any opinion about which language or framework you use to write your client app.

Manage network membership through the new Membership Group Manager (MGM)

Next-Gen Corda introduces the Membership Group Manager (MGM). Its role is to register and manage members of the network. This registration process can be automatic, or custom registration logic can be integrated with your own workflow through integration with the REST API.

New peer-to-peer communications layer

P2P (peer-to-peer) communications in Next-Gen Corda uses a standard HTTP transport, making it easier for people to connect their Corda installations to others in the network. The layer was designed and built as an independent component, which was released as an open source preview just over a year ago.

Pluggable ledger / pluggable notary

The UTXO ledger and notary in Next-Gen Corda will be familiar to those who already know Corda. However, similar to the P2P layer, the ledger and notary are designed and built as layers at the top of the stack. They are independent of the Corda flow model, which enables adding alternative ledger models in the future; in the case of the notary, custom implementations can be supported by Network Operators.

Operator Experience

We recognized that most of our customers wanted to host Corda in a containerized environment, typically in the cloud. We also recognized that Kubernetes is a good match for the new Corda architecture.

You don’t have to operate Next-Gen Corda in Kubernetes or the Cloud, but when you do, we’ll make your life as a platform operator easy, since Corda is designed to be operated in such environments.

Developers Developers Developers

Engineers of my vintage will probably remember the day in the early 2000s, when Steve Ballmer turned into a meme, but wasn’t he right about developers? He thought that the success of Microsoft was dependent on a large ecosystem of applications, supported by developers developing for the platform.

We want CorDapp developers to love the platform as much as we do and the “CorDapp Developer” persona has featured high in the Next-Gen Corda user stories. This means that we’ve introduced a REST API for easy integration and we’ve improved the CorDapp API so that it is now cleaner and more testable. We’re also giving CorDapp developers a way of running an entire Corda Network in a single process, easily runnable from a local development environment.

And this is just the start, because we have some very exciting developments in the pipeline – so watch this space!

What’s Next?

Road with the word START written on it

After reflecting over the journey of the last couple of years, it feels like we’ve reached a destination, but as my colleague and great mentor Katelyn Baker reminded me numerous times over the last 2 years, releasing Corda 5.0 is just the start of the road. We have built the foundations for a Corda that is ready to support the next generation of distributed applications at scale.

Yes, we will be improving and fine-tuning those foundations in the next few months, but mostly, we’ll be building on top of those foundations to help our customers and users to innovate.

For now though, we’re due a moment to reflect and recharge, but we hope that you will be busy testing the new platform and giving us the feedback that we need to make Corda even better.

Next-Gen Corda is available today. Download the Corda artifacts, learn more in the Corda docs, and start building for free.


Notaries in Corda 5: Pluggable Notary Protocols


By: Ramzi El-Yafi, Staff Software Engineer at R3

In a previous blog post, we looked at the Notary Architecture in Corda 5 as a whole. As part of this, we briefly mentioned that notary functionality is provided in the form of “plugin” CorDapps. Since then, we have added official documentation to explain the implications of this architecture on CorDapp Developers and Network Operators. This blog post provides some more background as to why we have adopted this model. It also poses some questions that both CorDapp Developers and Network Operators need to ask as part of the development and operational lifecycle.

Photo by Emre Turkan on Unsplash

Plugin Philosophy

In Corda 4, notary functionality is fixed function. When specifying a notary in the configuration, a single setting (the validating flag) indicates whether the notary supports the non-validating or validating notarization protocol. Under the hood, these are both sub-flows that execute as part of transaction finality. They share much of the same code, specifically around uniqueness checking. The non-validating notary plugin performs only minimal additional checks beyond uniqueness checking, whilst the validating notary plugin performs full contract validation against states in a transaction and all of the relevant back-chain.

Ultimately, the distinction between these protocols is entirely within flow logic. This led to the realization that we had implemented fixed-function logic, whereas a solution that implemented the flows in CorDapps would provide superior flexibility. As a result, the decision was taken to implement notarization flows in CorDapps, whilst making a separate “uniqueness service” available to these flows to perform fixed-function double-spend prevention checks. This provides the flexibility to add additional notary protocols in future, provided either by R3 or by a third-party developer.


Notary plugin CorDapps must provide an initiator and corresponding responder flow. These flows are slightly special, in that they are not directly invokable from the flow REST API. Instead, they are intended to be invoked as sub-flows, as part of transaction finality. Each notary service on the network will have a named protocol that it supports, defined in its Membership Group Manager (MGM) metadata. This corresponds to the protocol name of an initiating flow within the notary plugin, which is automatically invoked as a sub-flow during transaction finality. Whilst not required, it is recommended that a notary plugin is split into three distinct Corda Packages (CPKs):

  • A client CPK, which is included as part of an “application” Corda Package Bundle (CPB), installed on standard virtual nodes on the network. This contains the initiating flow.
  • A server CPK, which is included as part of a “notary” CPB, installed on notary virtual nodes on the network. This contains the responder flow.
  • An API CPK, which is included on both sides and is used to define the message payload between the initiator and responder flow.
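As a rough sketch (the module names here are hypothetical; the actual layout is up to the plugin author), this three-CPK split maps naturally onto a multi-module Gradle build:

```groovy
// settings.gradle — hypothetical module layout for a notary plugin.
// Each module is built into its own CPK.
include 'notary-plugin-api'      // shared message payload types, included on both sides
include 'notary-plugin-client'   // initiating flow, bundled into application CPBs
include 'notary-plugin-server'   // responder flow, bundled into the notary CPB
```

Keeping the payload types in their own module ensures the client and server agree on the wire format without either side depending on the other’s flow logic.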

This diagram outlines the structure of the non-validating notary plugin, which is shipped as part of the Corda 5.0 release:

Diagram outlining the structure of the non-validating notary plugin

This structure allows for the proper separation of responsibilities. Only notary virtual nodes have the responder flow installed and with it, the ability to respond to notarization requests. Similarly, only application virtual nodes have the ability to initiate notarization requests.


Let’s now consider the implications of this architecture from the perspective of different personas.

CorDapp Developers

The biggest impact of this architecture is arguably on CorDapp Developers. Since application CPBs are produced by CorDapp Developers, they decide which notary protocols their CorDapp will support, and it is they who ensure the relevant notary plugin CPKs are bundled as part of their application CPB. Right now, this decision is trivial since R3 only provides a non-validating notary plugin, so the decision comes down to whether their CorDapp utilizes the UTXO ledger. If it does, notarization is required and therefore the relevant non-validating notary CPKs must be included. Alternatively, if their CorDapp does not utilize the UTXO ledger (for example, if the CorDapp only uses flows and not a ledger), then there is no need to include any notary protocol.
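For illustration, bundling the non-validating notary client CPK into an application CPB is a one-line dependency in the CorDapp’s Gradle build. The coordinates and version property below follow the CSDE template at the time of writing, but treat them as an assumption and verify them against your Corda release:

```groovy
// build.gradle (application CorDapp) — coordinates are illustrative; verify against your Corda release.
dependencies {
    // Pulls the non-validating notary client CPK into the application CPB
    cordapp "com.r3.corda.notary.plugin.nonvalidating:notary-plugin-non-validating-client:$cordaNotaryPluginsVersion"
}
```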

Things may get more interesting in future as more notary protocols are added. The CorDapp Developer does not know anything about the network that their CorDapp will be deployed on, so they must bundle all appropriate plugins that are supported by their CorDapp. Conversely, it may be that their CorDapp needs a specialized notary protocol, written either by themselves or by another entity. In this scenario, they may forgo bundling the “standard” notarization protocols and only bundle the specialized protocol.

Notary Plugin Developers

The packaging decisions for Notary Plugin Developers are not so clear-cut. One or more CPKs must be produced to support a custom notarization protocol, and a separate notary CPB should also be produced for installation on notary virtual nodes.

For the non-validating notary protocol, R3 produces and distributes a standard notary server CPB for this purpose. We are able to do this because the protocol is simple and requires no third-party CPKs to operate. However, we can imagine a situation where this is not possible. For example, if we were to re-implement the validating notary protocol from Corda 4, which performs validation against the contracts of states in a transaction during notarization, this would require the notary server CPB to contain a CPK from the “application”, because it would need a custom set of contracts to verify against. In this situation, the Plugin Developer could only produce CPKs, and would require the CorDapp Developer to build a notary and an application CPB.

Network Operators

Network Operators must decide which notarization protocols their network will support and build one or more notary server Corda Package Installers (CPIs) from provided CPBs. These CPIs must then be installed on the relevant notary virtual nodes. Again, this is trivial whilst there is a single supported notary protocol, but it may become more involved if the network supports multiple notary protocols (and by extension, multiple notary services).


Whilst the changes to notarization protocols in Corda 5 require CorDapp Developers and Network Operators to think about which protocols they will support, the Corda 5.0 release represents the beginning of a journey to develop the functional capabilities of the notary. This approach provides an extensible framework that allows us to easily add additional notarization capabilities throughout the life of the Corda 5 platform.

Corda 5 Beta 4 is available today. Download the Corda artifacts, learn more in the Corda docs, and start building for free.


Next-Gen Corda 101 Part 1 – Key Concepts


In this video, we will show you the key concepts of Next-Gen Corda. Learn about its new architecture, including application networks, workers, and virtual nodes. In the second part of Next-Gen Corda 101, we will demonstrate how to develop a simple Token CorDapp and how to deploy a local Corda Cluster.


Next-Gen Corda 101 Part 2 – Developing a CorDapp


In the second part of Next-Gen Corda 101, we will demonstrate how to run a simple Next-Gen CorDapp. For this demo, we will use the CSDE (CorDapp Standard Development Environment), which makes the process of prototyping CorDapps more straightforward. We will deploy a local Corda Cluster with the Combined Worker. We will also build a Corda Package (CPK), Corda Package Bundle (CPB), and Corda Package Installer (CPI) file, and then install the CPI on the Corda Cluster. We will also create a virtual node and register it with the Membership Group Manager (MGM). Finally, we will run a flow and print the output.

Link to the Token CorDapp: https://github.com/corda/corda5-samples/tree/bootcamp-cordapp/java-samples/bootcamp-cordapp


Zero to Corda 5 in 10 minutes or less


By: David Currie – Principal Software Engineer at R3

As the release of Corda 5 gets steadily closer, it has never been easier to get a Corda 5 cluster up and running on Kubernetes. In doing so, you can see for yourself the new distributed worker architecture covered in James Higgs’ post on high availability in Corda 5. In this blog post, we’ll describe one method that should take less than 10 minutes. This assumes that you already have a Kubernetes cluster but don’t worry if you don’t, we can help you there too. The post will also give you an update on support for ARM that Simon covered in a previous post.

Before we get started though, a reminder that you don’t need to deploy Corda 5 to Kubernetes in order to write and test a CorDapp. The CorDapp Standard Development Environment allows you to run Corda as a single-process JVM (plus Postgres database) for testing your applications. The approach described in this blog is only if you want to see what a multi-process deployment of Corda on Kubernetes looks like.

Photo by Joseph Barrientos on Unsplash

Kubernetes Prerequisites

To start with, we need a Kubernetes cluster to deploy to, and the kubectl and helm command line tools.

The only requirements for the Kubernetes cluster are that it is running Kubernetes 1.23 or later and, for this deployment, has at least 6 CPUs and 8Gi RAM. Beyond that, it could be a single-node cluster running on your laptop (for example Docker Desktop, minikube, k3d, microk8s, or kind), or it could be a multi-node cluster in your favourite cloud provider (we currently test on AWS and Azure).

If you don’t already have a Kubernetes cluster, then minikube is a good option for deployment on your local machine with platform coverage for Linux, Windows, and macOS across both AMD and ARM. Follow the instructions to download/install minikube and then start a cluster as follows:

minikube start --cpus=6 --memory=8G

You’ll also need the Kubernetes CLI, kubectl. You can either follow the install instructions or, if you’re using minikube, you can have it pull the correct version for you by setting up the following alias:

alias kubectl="minikube kubectl --"

Lastly, as covered in a previous blog post, we use the Helm package manager to deploy Corda. Follow the install instructions for the helm CLI if you don’t already have it. You should ensure that you have Helm 3.9.4 or newer:

helm version

You now have an empty Kubernetes cluster and the tools necessary to install Corda and its prerequisites.

Install Postgres and Kafka

Corda 5 uses Postgres (to store state and configuration) and Kafka (as a messaging bus for communication within the cluster). These prerequisites could be met by a managed service, such as RDS for Postgres in AWS, and MSK or Confluent Cloud for Kafka. If you don’t have previous experience running Postgres or Kafka, we’d highly recommend this approach for production deployments. For the purposes of this post though, we’re going to deploy Postgres and Kafka to run on Kubernetes as a universal approach that doesn’t introduce any additional cloud spend.

In his blog post on ARM support, Simon Johnson covered an issue with the Bitnami Helm charts that we were using to deploy Postgres and Kafka. The Docker images they deployed were not built to run on ARM. As a consequence, we created a new Helm chart to deploy a minimal configuration of Postgres and Kafka for development use that runs the official Docker image for Postgres and the Kafka image from Confluent, both of which are built for AMD and ARM. That corda-dev-prereqs Helm chart is packaged up and made available via Docker Hub.

Assuming that your current Kubernetes context is targeting the cluster you wish to deploy to, you can create a namespace called corda and deploy Postgres and Kafka into it using the following command:

helm install prereqs oci://registry-1.docker.io/corda/corda-dev-prereqs \
  --namespace corda --create-namespace \
  --timeout 10m --wait

The ten-minute timeout is only to give it sufficient time to pull down the container images from Docker Hub on a slow connection. Once pulled, it should be nearer to ten seconds for the containers to actually start and reach the ready state.

You now have Kafka and Postgres installed in your Kubernetes cluster.

After we had created this chart, Bitnami announced that they had started building images for ARM. We’re still going to continue using our new corda-dev-prereqs chart for development use as it has a couple of additional benefits: it uses Kafka KRaft so there is no need to run ZooKeeper and it generates credentials for multiple users in a way that they can be directly consumed by the Corda Helm chart.

Install Corda

With the groundwork all in place, installing Corda itself is just a single command:

helm install corda oci://registry-1.docker.io/corda/corda \
  --version 5.0.0-Gecko1.0 --namespace corda \
  --values https://gist.githubusercontent.com/davidcurrie/e9c090bdee99ea0a8412fc228218a0e0/raw/723a4ad8886853b07339288c85b86ef8fcb57c1e/corda-prereqs.yaml \
  --timeout 10m --wait

This installs the latest version of Corda that was available at the time of writing: Beta 2. We extend the default timeout again as this time not only do we have to pull the Docker images, but the installation also sets up the Kafka topics and the Postgres schema/tables before starting the Corda workers and performs some final configuration of the Corda RBAC roles.

If everything worked successfully, you should see three completed jobs and a set of Corda workers in ready state, for example:

$ kubectl get pods --namespace corda
NAME                                             READY   STATUS      RESTARTS   AGE
corda-create-topics-fv2cx                        0/1     Completed   0          4m50s
corda-crypto-worker-68947f88-84nfg               1/1     Running     0          3m35s
corda-db-worker-84fbbf9b78-nvsp5                 1/1     Running     0          3m35s
corda-flow-worker-8686f7b59b-6lkn7               1/1     Running     0          3m35s
corda-membership-worker-5d7c69f996-pdw96         1/1     Running     0          3m34s
corda-p2p-gateway-worker-76787cb8c9-2866b        1/1     Running     0          3m34s
corda-p2p-link-manager-worker-56c9d5df97-wm9qr   1/1     Running     0          3m35s
corda-rest-worker-c89975c9b-9z58k                1/1     Running     0          3m35s
corda-setup-db-g4v7c                             0/1     Completed   0          4m13s
corda-setup-rbac-v9l2s                           0/3     Completed   0          102s
prereqs-kafka-68d9cc968c-45cpl                   1/1     Running     0          6m21s
prereqs-postgres-86b895b786-cg56w                1/1     Running     0          6m21s

If you don’t see the Corda workers running, take a look at the Cluster Health section of the Corda documentation for suggestions on how to troubleshoot common problems.

For those interested in the details, the corda-prereqs.yaml gist contains the information needed to tie together the Corda installation with the Kafka and Postgres installation in the previous step. For example, the server addresses for Kafka and Postgres, along with the details of the secrets containing the credentials and certificates. Given that we’ve only configured a single Kafka broker, it also ensures Corda only attempts to create a single replica for each topic. Lastly, it switches the default JSON format logging to text format which is more readable if you’re not pushing the logs into a logging stack. For an overview of some of the more commonly used overrides, see the documentation.
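As an illustration of the shape of such an override file (the key names below are simplified assumptions, not the chart’s exact schema; consult the linked gist and the Corda Helm chart’s values.yaml for the real keys):

```yaml
# Illustrative values override — key names are assumptions; check the chart's values.yaml.
kafka:
  bootstrapServers: "prereqs-kafka:9092"   # the Kafka service installed in the previous step
db:
  cluster:
    host: "prereqs-postgres"               # the Postgres service installed in the previous step
logging:
  format: "text"                           # switch from the default JSON-format logs
```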

You now have Corda up and running on Kubernetes, using Kafka and Postgres instances also deployed to the same Kubernetes cluster.

Verify Access to the Corda REST API

At this point, you have a fully functional Corda cluster and can go on to use the Corda REST API to install CorDapps, create virtual nodes, set up networks, and even peer together multiple clusters. That’s all beyond the scope of this post but, just to give you a warm fuzzy feeling that we have a working REST API, here’s a small snippet to test it out:

kubectl port-forward --namespace corda deployment/corda-rest-worker 8888 &
CORDA_USERNAME=$(kubectl get secret corda-initial-admin-user \
  --namespace corda -o go-template='{{ .data.username | base64decode }}')
CORDA_PASSWORD=$(kubectl get secret corda-initial-admin-user \
  --namespace corda -o go-template='{{ .data.password | base64decode }}')
curl -k -u "$CORDA_USERNAME:$CORDA_PASSWORD" -X POST \
  "https://localhost:8888/api/v1/hello?addition=World!"
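The `base64decode` in the go-templates above is plain base64 decoding of the secret’s data fields. The equivalent by hand, using a made-up value rather than a real secret, looks like this:

```shell
# Kubernetes stores secret data base64-encoded; 'base64 -d' reverses it.
ENCODED=$(printf 'admin' | base64)   # a made-up username, as it would appear in the secret
printf '%s' "$ENCODED" | base64 -d   # prints: admin
```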

If you get Hello World! (from admin) back, congratulations, you have a working cluster! You’re good to start exploring the other operations available to you in the Corda documentation.

Wrapping Up

Finally, when you’re all done trying it out, because we installed everything into a single Kubernetes namespace, you can remove it all with a single command:

kubectl delete namespace corda

There was a lot of text above, but getting to a running Corda really did just involve running two commands: one to install Postgres and Kafka, the other to install Corda. Even if it took you more than ten minutes the first time through, I’m positive it will take less on the second attempt!

The post Zero to Corda 5 in 10 minutes or less appeared first on Corda.

Build/DevOps @ R3 – Operation Automation


By: Maciej Swierad – DevOps Engineer at R3

I, for one, welcome our new robot overlords. For they are the future, and the future is automated.
George Devol

Have you ever wondered how software actually makes its way into the real world? How it’s published and distributed? How it’s built and secured? Have you thought about how the sysAdmin role has been retired and moulded into DevOps and what that means? Well have I got the post for you!  

Settle down, take a cup of tea, and bear with me as I talk you through build engineering and DevOps while tossing in my grad perspective on what it all is. However, if you’re a savvy build engineer already, or simply like seeing real world examples, just jump ahead to the heading “Real World example – GitHub repo creation automation”.

Photo by Xavi Cabrera on Unsplash

Why We Automate

Automation helps reduce the time and effort required to build and deploy software, which improves the overall efficiency of the software development process. This saves time and resources and allows teams to focus on other important tasks. In an ideal world, a software release is just one click away!  

Automation helps ensure that the build and deployment process is consistent and repeatable, which improves the reliability and stability of the software. This is especially important in a cloud native environment, where the software may be deployed across multiple cloud environments or services. This is often a security requirement too. Think about it: what happens if you’re a financial entity using a product which has been sent to you, but the sender does not have a reliable and repeatable build system, and therefore the product you received is not the exact same as what was tested by QA and signed? Suddenly, you’re looking at a costly mistake. Fret not however; good build automation will save you from such an endeavour.  

We employ the use of infrastructure as code (IAC) and configuration as code (CasC) to manage and deploy the infrastructure and configuration of the software in a consistent, automated way. This ensures that the environments used to build, test, and deploy the software are replicable, meaning that they can be deployed consistently and reliably in any other environment with the same configuration and settings. 

What is build engineering/DevOps in Build?

One of the most common problems in software engineering is how to ensure your software is delivered in a reliable manner. You want to prevent issues as described above, to take the weight of building and deploying away from your engineers, and to adhere to best security practices.  

That’s where build engineers step in! Build engineers are responsible for creating and maintaining the processes and tools that are used to build, test, and deploy software applications. This involves writing scripts and automation tools to automate the build and deployment process, setting up and configuring continuous integration and continuous delivery (CI/CD) systems, and working with other teams to ensure that the software build process is efficient and reliable. DevOps engineers within the build team at R3 are also responsible for the smooth deployment and maintenance of the services used by developers such as Artifactory, Jenkins instances, and whatever else may be needed.  

One of the most common attack surfaces in a company is the build pipeline: the CI/CD system. If we look at the recent incident where the American government’s no-fly list was leaked, this was done through a vulnerable Jenkins instance. Someone at a major US airline was running a Jenkins instance with anonymous admin access; all the malicious actor had to do was scrape the credentials, and then they had all they needed to access the no-fly list.

One of the jobs we are tasked with as DevOps engineers in R3 is to make sure that can’t happen. We need to make sure our pipelines are secure, locked behind VPNs, credentials obfuscated, amongst other security practices. We must do this while also still focusing on continuous development. In my eyes, having become a DevOps engineer relatively recently, our greatest strength is versatility.

Infrastructure/Configuration as code

In build engineering and DevOps, automation is the key to improving the reliability and efficiency of the build and deployment process. There are a variety of tools and techniques used to automate various aspects of the process, such as provisioning infrastructure, configuring applications, and deploying code. Some common tools and techniques for automating build and DevOps processes include: 

  • Infrastructure as code (IaC) tools, such as Terraform, are used to define and manage your cloud infrastructure in a declarative and version-controlled configuration file. 
  • Configuration management tools, such as Ansible and Puppet, are used to define and manage the configuration of your applications and services in a declarative and version-controlled playbook. 
  • Container orchestration tools, such as Kubernetes, are used to define and manage your containerized applications and their dependencies in a declarative and version-controlled configuration file. 

By using these tools and techniques, build and DevOps engineers can automate many of the repetitive and error-prone tasks involved in building and deploying applications, and can focus on delivering value to their users.
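As a tiny illustration of the declarative style these tools share, a Terraform file describes the desired end state rather than the steps to get there (the provider, resource, and names below are hypothetical, just for illustration):

```hcl
# Hypothetical example: declare a storage bucket; `terraform plan` shows the
# diff against real infrastructure and `terraform apply` reconciles it.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_s3_bucket" "build_artifacts" {
  bucket = "example-build-artifacts"

  tags = {
    team = "build"
  }
}
```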

Real World example – GitHub repo creation automation

When I came into R3, I had an idea of an ideal world where repositories are created automatically based on service desk tickets and managed automatically with Terraform. 

In my mind this would ensure that proper etiquette, such as branch protection, required reviewers, CODEOWNERS, and PR gates, is kept everywhere. When I brought this idea forward to my manager, it turned out it was something they had thought of but never implemented company-wide. And so, it ended up being one of the cooler projects I’ve recently worked on.

With the use of Terraform, Atlantis, and Jenkins, we implemented a system where once a user enters a ticket to request a new repository, the user responding to the ticket kicks off a Jenkins pipeline that creates a JSON file with the repo details, pushes this file to GitHub and opens a pull request. Atlantis then picks up the pull request and runs its plan; essentially just a Terraform plan. If we are happy with the proposed outcome of the plan, then the plan is applied, and the repository is created. 
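For illustration, the JSON file produced by the Jenkins pipeline might carry details along these lines (the field names here are hypothetical, not our actual schema):

```json
{
  "name": "example-new-service",
  "description": "Requested via service desk ticket SD-1234",
  "visibility": "private",
  "default_branch": "main",
  "teams": ["corda-devops"]
}
```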

With this approach we can create template files with the github_repository_file resource.

resource "github_repository_file" "github_repositories" {
  for_each = {
    for item in flatten([
      for repo in github_repository.github_repositories : [
        for key, value in local.standard_github_repository_files : {
          repository : repo.name
          branch : repo.default_branch
          file : value.file
          content : value.content
          commit_message : "Adding basic ${key}"
        }
      ] if !repo.archived
    ]) : "${item.repository}:${item.file}" => item
  }

  repository          = each.value.repository
  branch              = each.value.branch
  file                = each.value.file
  content             = file(each.value.content)
  commit_message      = each.value.commit_message
  overwrite_on_create = false

  lifecycle { ignore_changes = all }
}

GitHub Automation workflow

To create these files, define their location in the local.tf file. The process iterates through each referenced file, copying its content and creating the file in the repository.  
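As a sketch of what that local.tf might contain (the keys and paths are illustrative), each entry maps a repository file path to a local template whose content is read with file():

```hcl
# Illustrative sketch of local.standard_github_repository_files; the map key
# (e.g. "codeowners") also ends up in the commit message "Adding basic ${key}".
locals {
  standard_github_repository_files = {
    codeowners = {
      file    = "CODEOWNERS"
      content = "${path.module}/templates/CODEOWNERS"
    }
    pr_template = {
      file    = ".github/pull_request_template.md"
      content = "${path.module}/templates/pull_request_template.md"
    }
  }
}
```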

For the keen-eyed out there, you may have noticed in the above code that lifecycle is set to ignore all changes. When we first implemented this automation, we not only created repositories but also tried to manage them. This proved problematic, to say the least. The unknown factor that is “human interaction” proved detrimental to managing repositories! People would make changes that seemed trivial, such as changing the default branch name. However, when Terraform checked its state, suddenly the branch protection seemed to be gone, the branch seemed to have been deleted, or other problems occurred. In short, Terraform couldn’t handle changes made outside of its state. (And anyone who has had the displeasure of fixing Terraform state by hand has my condolences.)

After much deliberation, we decided, for all resources in the Terraform deployment, to ignore changes. Unfortunately, managing is not something that should be done automatically – yet! Instead, we now focus on creating repositories based on a template as described above. 

My future work on this project will include creating a coherent set of security levels per repository and enforcing management through a non-automated repository. Pro-tip: if you manage more than a small number of repositories with a system like this, it ends up taking quite a while to check each repository’s state.


In conclusion, in this new world of DevOps and continuous development and integration not only are build engineers and DevOps engineers a necessity but so is automation! The more human error that can be removed from day-to-day tasks the better the development lifecycle becomes.  

I look forward to continuing my journey down this path, and hope to see you back for my next appearance!

Learn more about the deployment of Corda in this blog post.

The post Build/DevOps @ R3 – Operation Automation appeared first on Corda.

Sandboxes in Corda 5 — Java Security Manager


By: Miljenko Brkic, Principal Software Engineer at R3

This article is a follow-up to the Corda 5 Sandboxes blog post. It will explore in more detail how Corda 5 uses Java Security Manager to secure sandboxes.

Corda 5 supports multiple virtual nodes sharing a single Corda installation and multiple CorDapps (Corda distributed applications) running within the same Corda (JVM) process. This introduces the risk of one CorDapp interacting with another or with the Corda platform. That interaction could be unintended, like a simple dependency clash, but it could also be malicious. A sandbox is an execution environment within a JVM process that provides isolation for a CorDapp. It shields the CorDapp from outside threats, but it also restricts what the CorDapp can do, so that running potentially dangerous code cannot harm others.

What is Java Security Manager?

Java Security Manager protects applications against threats posed by running untrusted code. It was originally designed to protect users from potentially malicious Java applets downloaded from websites, which was similar to the threat that web browsers face today running JavaScript. While CorDapp writers trust their own code, they can’t trust CorDapps written by others, so Corda, as an application hosting platform, needs to treat CorDapps as untrusted code that could be malicious.

Security Manager acts as a gatekeeper, controlling access to sensitive resources (for example, accessing your local disk or local network) and ensuring that malicious code cannot compromise the security of the system. It enables administrators to define and enforce security policies within the JVM hosting Java applications. Administrators can specify exactly which resources an application can access and which actions it’s permitted to perform. For example, an application might be granted permission to read a file, but not to write to it or delete it.

Security Manager key concepts

Security Manager guards access to sensitive resources. It is not enabled by default; instead, it has to be explicitly enabled (for example, using the -Djava.security.manager option). There can be only one global instance of Security Manager and it’s accessible using System.getSecurityManager(). When some running code calls a method on the Java API, requesting an operation be performed, the JVM will check with the security manager whether this is allowed. An example of this would be the ability for a Java application to open an external connection over HTTP (such as connecting to a web host). If the administrator of the JVM has denied that ability, the application will receive an error. This introduces a separation between what an application is allowed to do at runtime vs at compile time, meaning an application may work within a less constrained environment and fail in a more constrained one.

Access Controller provides the basis of the default implementation of Security Manager.

Code source is the location from which the class was obtained (for example, a web URL or JAR file).

Permission represents access to a protected resource.

Protection domain encapsulates the characteristics of a domain, which encloses a code source and a set of permissions granted to it.

Security policy provides management of permissions in a configurable way. The default implementation uses text files, where each text block in a policy file defines a Protection Domain:

grant signedBy "r3", codeBase "http://www.r3.com/" {                // Code Source
    permission java.io.FilePermission "/tmp", "read";               // Permissions
    permission java.net.SocketPermission "*:1024-", "connect";
};

How Security Manager checks permissions

Let’s take an example and see how Security Manager checks permissions. Consider a CorDapp that wants to read a file from a filesystem:

// CorDapp
public void readFile(String fileName) throws FileNotFoundException {
    InputStream inStream = new FileInputStream(fileName);
}

It does this by executing the readFile() method that creates a Java API’s FileInputStream for reading. The constructor of FileInputStream will first check if there is a Security Manager; if there is, it will call its checkRead method:

// FileInputStream
public FileInputStream(File file) throws FileNotFoundException {
    String name = (file != null ? file.getPath() : null);
    SecurityManager security = System.getSecurityManager();
    if (security != null) {
        security.checkRead(name);
    }
    if (name == null) {
        throw new NullPointerException();
    }
    // ...
}

Security Manager will then delegate permission checking to method checkPermission of the Access Controller:

// SecurityManager
public void checkRead(String file) {
    checkPermission(new FilePermission(file, SecurityConstants.FILE_READ_ACTION));
}

public void checkPermission(Permission perm) {
    java.security.AccessController.checkPermission(perm);
}

The diagram below shows all these methods on the call stack (on the left) and also related Protection Domains (on the right):

Diagram showing all methods on the call stack (on the left) and also related Protection Domains (on the right).

In order to determine whether this sensitive operation is allowed, Access Controller collects all Protection Domains related to the classes on the call stack. The operation is allowed only if all protection domains have the permission to perform it; otherwise, a SecurityException is thrown. The System Domain has all permissions; the permissions of other domains are defined via the Security Policy.
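The “all domains must agree” rule can be simulated in a few lines of plain Java. This is a sketch of the principle only, not the JDK’s actual stack-walking implementation; the “platform” and “CorDapp” domains are invented for illustration:

```java
import java.io.FilePermission;
import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.util.List;

public class StackWalkSketch {
    // The operation is allowed only if every "protection domain" on the
    // (simulated) call stack implies the requested permission.
    static boolean allowed(List<PermissionCollection> domains, Permission requested) {
        for (PermissionCollection domain : domains) {
            if (!domain.implies(requested)) {
                return false; // the real Access Controller throws SecurityException here
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Permissions platform = new Permissions();
        platform.add(new FilePermission("/tmp/*", "read,write"));

        Permissions cordapp = new Permissions();
        cordapp.add(new FilePermission("/tmp/*", "read")); // read-only domain

        List<PermissionCollection> stack = List.of(platform, cordapp);

        System.out.println(allowed(stack, new FilePermission("/tmp/data", "read")));  // true
        System.out.println(allowed(stack, new FilePermission("/tmp/data", "write"))); // false
    }
}
```

Because the read-only “CorDapp” domain is on the simulated stack, the write check fails even though the “platform” domain alone would have allowed it.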

There is an exception to this, which will be explained using another example. As an application platform, Corda also provides a number of injectable services to CorDapps. One of them is the JSON Marshalling Service, which depends on a third-party library that uses reflection. This means that a CorDapp should have reflection permission in order to use it. However, we want untrusted CorDapps to be able to use trusted platform services without the need to have special permissions.

That can be accomplished with a privileged caller that can perform a sensitive operation on behalf of a class that doesn’t have the required permission. Access Controller provides the doPrivileged() method, and when this method is found on the stack, it stops further checking of protection domains.

The snippet below shows how the Corda JSON Marshalling Service uses doPrivileged() to enable CorDapps to serialize classes to JSON, without the need to have the required reflection permissions:

// JsonMarshallingServiceImpl
override fun format(data: Any): String {
    return try {
        AccessController.doPrivileged(PrivilegedExceptionAction {
            // the reflection-based serialization call (e.g. Jackson's
            // ObjectMapper) runs inside the privileged block
            objectMapper.writeValueAsString(data)
        })
    } catch (e: PrivilegedActionException) {
        throw e.exception
    }
}

Security Manager and OSGi

CorDapps running within sandboxes need to be isolated from each other. Classes used in specific CorDapps shouldn’t be visible to others. This is achieved using OSGi (Open Service Gateway Initiative), which is a framework for developing and deploying modular Java applications. A unit of modularization is called a bundle and is a Java archive file (JAR) that additionally contains a manifest that describes it and its dependencies.
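For reference, an OSGi bundle’s manifest carries headers like these (the header names are standard OSGi; the values are illustrative):

```
Bundle-SymbolicName: com.example.cordapp
Bundle-Version: 1.0.0
Import-Package: net.corda.v5.application.flows
Export-Package: com.example.cordapp.api
```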

The OSGi Security Layer is based on the Java security architecture. In OSGi, a Protection Domain is mapped to a bundle. A bundle’s permissions are handled through Conditional Permission Admin. OSGi adds several features to Java security, including:

  • OSGi specific permissions
  • Conditional permission management
  • Deny-access decisions

The Corda Security Manager

Corda’s Security Manager adds permissions to Conditional Permissions Admin’s permission table using the bundle location as a condition. Bundles loaded for the specific sandbox type share the same location prefix, so this enables the application of different sets of permissions to different types of sandboxes.

Disabling access to internal platform classes

CorDapps have access to platform services via interfaces. Granting them reflection permissions could lead to various malicious exploits of platform classes that implement those services and compromise a sandbox. However, a CorDapp may depend on some public libraries that use reflection, and unfortunately, most public libraries don’t use reflection in a privileged call. That means that it’s not enough to grant reflection permissions only to the library: the CorDapp needs it as well. So the desired solution is to allow reflection but not over Corda internal packages.

That goal is achieved by allowing reflection but denying access to internal packages by using the accessClassInPackage permission. The Security Manager will check this permission only if a package name starts with a prefix defined for a security property package.access. This property is defined in the java.security file distributed with JVM (which can be replaced or overridden with a custom file) and can also be set with Security.setProperty().
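Since package.access is just a security property, it can be inspected and extended at runtime via the standard Security API. A minimal sketch, where the package prefix net.corda.internal. is hypothetical:

```java
import java.security.Security;

public class PackageAccessExample {
    public static void main(String[] args) {
        // "net.corda.internal." is a hypothetical prefix used for illustration.
        String current = Security.getProperty("package.access");
        String updated = (current == null || current.isEmpty())
                ? "net.corda.internal."
                : current + ",net.corda.internal.";
        Security.setProperty("package.access", updated);
        System.out.println(Security.getProperty("package.access").endsWith("net.corda.internal."));
    }
}
```

With the property set, a Security Manager consulting it would demand the accessClassInPackage permission for any class loaded from a package under that prefix.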

Configuring security permissions

One of the best security practices is the principle of least privilege. It is a security concept of providing no more permissions than is necessary to perform the required job. Corda follows this principle, which means that CorDapps won’t have any permission granted by default.

However, some CorDapps might need certain permissions granted in order to do their job; for example, make HTTP requests to external services. Configuring permissions can be complex and requires a good understanding of Java security concepts. It’s also a very delicate task since granting permissions lowers sandbox security. For this reason, Corda security permissions are managed by Corda administrators via the configuration of security policy.

Corda security policies

Corda comes with a few predefined security profiles that can be used as provided or customized for specific needs. The strictest policy is applied by default, but a Corda administrator can override this policy if required.

Policies can have “allow” and “deny” access blocks. Each block starts with a condition that needs to be satisfied in order to apply that block. After that is a list of permissions that are either “allowed” or “denied” based on the block type.

Snippet below shows one deny-access block for Flow sandbox:

[org.osgi.service.condpermadmin.BundleLocationCondition "FLOW/*"]
(java.io.FilePermission "<


This article provided more details about Security Manager and how it is used in Corda 5 to securely run user code within sandboxes. The security of sandboxes also relies on OSGi, which is used to load bundles and control visibility between them. We will follow up with another article about how we use OSGi in Corda 5.

The post Sandboxes in Corda 5 — Java Security Manager appeared first on Corda.

Being Non-Technical in a Technical Department


By: Steph Paine

The opportunity of moving to a new department is often an exciting thought for many, but for me it came with a level of doubt. Moving to Business Resources, the central admin and support team for Engineering, felt like going from a small, fun, outgoing team to what many on the outside see as a large, super-smart, tech-focused, introverted, closed division of the business.

How to make a difference

The Engineering team is made up of over 150 people, but unfortunately has less than 20% female representation. With a department full of extremely intelligent and gifted people, what did I have to offer this team? I know very little of coding, nodes, Kubernetes, clusters; but I am a practical, organized individual, I understand people, and this is what I was told that I could bring to the role and the department. This was an exciting opportunity to grow, to learn, but also to make a tangible difference.

Looking at the job description was a bit daunting – the importance of the need to drive and improve engagement and development of teams and people was evident. The Corda business unit (BU) had recently been a victim of the ‘tech boom’, lots of people had left to explore new ventures and opportunities and those remaining were feeling the pressure. 

Overcoming challenges and knowing how to contribute

Six months in and I really don’t know what I was worried about. I have had to overcome some challenges, the biggest of which was creating work for myself. In my previous role the work had always come to me; I was dealing with requests and reacting to situations. In this role there were of course tasks that ended up in my lap, but a lot of it has been seeing an issue, creating myself a project, and executing a solution. This is a different way of working for me but something that I think I have excelled at.

Yes, sometimes I sit in on meetings and I can get lost, but it’s okay; there is no better way to learn than to be immersed in the conversations. My role isn’t to understand the technical stuff; it’s to drive processes and find ways of working that can be improved, to fill gaps where we are perhaps missing some guidelines, and to bring a different view.

Being a workplace Mental Health First Aider (MHFA) has allowed me to bring soft skills to the role. I feel I can use my instincts to know what kind of initiatives will bring about positive reactions and outcomes for my team. I have been described as a ‘consistent champion for team building and for the promotion of wellbeing in the team’. That can often be overlooked as important, but as someone who has worked in management and business a lot longer than me once said, “software is a people business”.

Leveraging soft skills to improve a technical department

The biggest task I was set when I came to the role was to look at the BU’s results from the mid-year engagement survey, examine the areas where we scored low, and find some ways to improve what was being done (or not being done). I quickly identified three areas where we were struggling to meet expectations: Engagement, Communication, and Career Paths. This is where I was to bring my soft skills, previous role experience, and new ideas to the table; working with others, I knew I could influence some real change.

The first thing I identified that we needed to do was implement structure to our communications. We now have a regular monthly All Hands for the entire BU, a successful blog pipeline, and several interesting Tech Talks that we host on a Thursday, when we know we will have the most people in the office to attend in person. Regular and reliable communication drives engagement.

Career progression was a much harder issue to address, mostly because everyone sees their pathway differently. Learning isn’t always work- or fact-based; it can be personal development, and R3 has always made it clear we support both. In the end the Leadership team agreed to implement a dedicated learning day for the BU. Now, every third Friday of the month, it’s booked in everyone’s calendar to remind them to utilize the learning platforms to undertake the courses/talks/exams that they want to in order to improve their knowledge.

We have seen definitive improvement in these areas in the feedback we have received recently and that is personally very gratifying. There is still a long way to go and other areas of focus going forwards, but I can honestly say that this improvement wouldn’t have been seen without someone whose dedicated role encompasses these tasks.

Some people might think this would be a solitary job, but far from it. I am in the office most days; it wasn’t a requirement of the role, but I think there is a benefit to being present for the department. I am not part of any specific team (I am literally a lone branch on our BU org chart), but I get to float around teams constantly. If I organize a lunch for one small team or an after-work social for another, I am normally invited along, perks of the job 💁🏼‍♀️.

Grab the opportunity and make an impact

Looking back on when this chance was first offered to me, nearly a year ago now, the fear I had at that time has long gone; time really does fly when you are having fun and engrossed in making an impact. You aren’t offered opportunities if people don’t think you are the right person for them. Sometimes it takes a little pushing, but it will all be worth it in the end.

I hope that I have proved that there is a need for all types of workers in a software company. R3 is brilliant, and I would highly recommend it as a place where you will enjoy working and feel proud to work. If this sounds like a bit of you, check out our website with a variety of open roles: Job postings.

The post Being Non-Technical in a Technical Department appeared first on Corda.