Forking Brilliant

Houston We Have A Problem

I want to check that a transaction will work on mainnet for an account I don’t control. In this case it’s for a large LP in the Balancer staBal3 pool and I want to check they could successfully migrate their staBal3 to the new bb-a-USD using a Relayer multicall with the params created by the SDK.

This definitely isn’t the most elegant way of doing things but it works!

Whale Hunting

The first thing I need to do is to find a large staBal3 LP account and figure out their BPT balance. I can use the Balancer Subgraph to query account pool shares for the staBal3 pool. Query looks like:

query MyQuery {
  poolShares(where: {poolId: "0x06df3b2bbb68adc8b0e302443692037ed9f91b42000000000000000000000063"}, orderBy: balance, orderDirection: desc) {
    id
    balance
  }
}
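
If you want to run that query programmatically rather than in the Subgraph playground, a minimal sketch looks like this (it assumes the public mainnet Balancer V2 subgraph endpoint and any fetch implementation):

import fetch from 'isomorphic-fetch';

// Public Balancer V2 subgraph endpoint (at the time of writing)
const SUBGRAPH_URL =
    'https://api.thegraph.com/subgraphs/name/balancer-labs/balancer-v2';

const query = `{
    poolShares(
        where: { poolId: "0x06df3b2bbb68adc8b0e302443692037ed9f91b42000000000000000000000063" },
        orderBy: balance,
        orderDirection: desc
    ) {
        id
        balance
    }
}`;

async function getStaBal3Shares() {
    const response = await fetch(SUBGRAPH_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query }),
    });
    const { data } = await response.json();
    // Each poolShare id contains the holder's address, so the top entries are the whales
    return data.poolShares;
}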

I then manually worked my way down the list of addresses until I found one that was an EOA: https://etherscan.io/address/0x4086e3e1e99a563989a9390facff553a4f29b6ee. At the time of investigation this had a BPT balance of 10205792.037653741889318463 (a cool $10.28mil!).

Exit Stage Right

The SDK helper function (relayer.exitPoolAndBatchSwap) that creates the call data requires an input param of expectedAmountsOut which in this case represents the DAI/USDC/USDT amounts out when exiting the staBal3 pool. Because I don’t have the maths required for this exposed yet, a quick way to get these amounts is to see the output amounts using Withdraw in the UI. There’s a very nice tool that allows us to simulate this when we don’t have control of the account of interest: https://www.impersonator.xyz/

Now that I’ve got all the info required I can generate the call data by using the helper function. In this case we get an array of call data which represent an exitPool call on staBal3 followed by a batchSwap that swaps the stables received from the exit to the bb-a-USD BPT.

The Magic

Tenderly has lots of useful features including Transaction Simulations. To begin I tried simulating the multicall on the Mainnet Relayer but the tx failed, highlighting a new issue – the account hasn’t approved the Balancer Relayer. To get around this I can use a Tenderly Fork – “Forks allow you to chain simulation and test out complex scenarios with live on-chain data”. This is cool because I can now fork the chain, make an approval on the Relayer from the account, and then successfully simulate the multicall!
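
For reference, the fork flow from code looks roughly like the sketch below. The fork RPC URL and Relayer address are placeholders, the Vault ABI fragment covers only the call I need, and it relies on Tenderly fork RPCs accepting transactions “from” any address without its private key:

import { Contract } from '@ethersproject/contracts';
import { JsonRpcProvider } from '@ethersproject/providers';

const FORK_RPC = 'https://rpc.tenderly.co/fork/<fork-id>'; // from the Tenderly dashboard
const WHALE = '0x4086e3e1e99a563989a9390facff553a4f29b6ee';
const VAULT = '0xBA12222222228d8Ba445958a75a0704d566BF2C8';
const RELAYER = '0x...'; // Balancer Relayer address

const provider = new JsonRpcProvider(FORK_RPC);
// On a Tenderly fork we can act as the whale without holding its key
const whaleSigner = provider.getSigner(WHALE);

async function approveAndSimulate(multicallData: string[]) {
    // 1. Approve the Relayer for the whale account on the Vault
    const vault = new Contract(
        VAULT,
        ['function setRelayerApproval(address sender, address relayer, bool approved)'],
        whaleSigner
    );
    await vault.setRelayerApproval(WHALE, RELAYER, true);

    // 2. The Relayer multicall built by the SDK should now succeed
    const relayer = new Contract(
        RELAYER,
        ['function multicall(bytes[] data) payable returns (bytes[])'],
        whaleSigner
    );
    const tx = await relayer.multicall(multicallData);
    console.log('Simulated multicall tx:', tx.hash);
}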

Photo by Joel Muniz on Unsplash

Building an SDK 0.1.0 – Improving SOR data sourcing

Intro

A big focus for Balancer Labs this year is to make it really easy to build on top of the protocol. To aid in that we’re putting together the `@balancer-labs/sdk` npm package. As the current lead on this project I thought I’d try and document some of the work to help keep track of the changes, thought process and learning along the way. It’ll also be useful as a reminder of what’s going on!

SOR v2

Some background

We already have the Smart Order Router (@balancer-labs/sor), a package that devs can use to source the optimal routing for a swap using Balancer liquidity. It’s used in Balancer’s front-end and other projects like Copper, and is a solver for Gnosis BGP. It’s also used in the Beethoven front-end (a Balancer-friendly fork on Fantom – cool project and team, worth checking out).

The SOR is also used and exposed by the SDK. It’s core to making swaps accessible but is also used for joining/exiting Boosted Pools, which is done via PhantomBpt and swaps (a topic for another time I think!).

SOR Data

The diagram below shows some of the core parts of the SOR v2.

SOR v2

To choose the optimal routes for a swap the SOR needs information about the Balancer pools and the price of assets. And as we can see from the diagram the sourcing of this data is currently very tightly coupled to the SOR. Pools data is retrieved from the Subgraph and updated with on-chain balances using a multicall. And asset pricing is retrieved from CoinGecko.

Recently Beethoven experienced a pretty large growth spurt and found there were some major issues retrieving data from the Subgraph. They also correctly pointed out that CoinGecko doesn’t always have the asset pricing (especially on Fantom) and this information could be available from other sources.

After some discussions with Daniel (a very helpful dev from Beethoven) it was agreed that a good approach would be to refactor the SOR to create composability of data fetching so the user is able to have more control over where data is coming from. With this approach, the SOR doesn’t need to know anything about CoinGecko or the Subgraph and the data could now come from anywhere (database, cache, on chain, etc.), and as long as it implements the interface, the SOR will work properly.

Changes – SOR v3

I came back from Christmas break and Daniel had made all the changes – friendly forks for the win 💪! The interface changes are breaking but the improvements are worth it – SOR 3.0.0.

Config

The goal was to remove all the chain specific config from the SOR and pass it in as a constructor parameter. This helps to avoid non-scalable hard-coded values and encourages a single source of truth. It also gives more flexibility for the variables and makes the code easier to test.

There is now the SorConfig type:

export interface SorConfig {
    chainId: number;
    weth: string;
    vault: string;
    staBal3Pool?: { id: string; address: string };
    wethStaBal3?: { id: string; address: string };
    usdcConnectingPool?: { id: string; usdc: string };
}
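
As an illustration, a mainnet config could look something like this (WETH, the Vault and staBal3 are the well-known mainnet addresses; the optional entries just tell the SOR about connecting pools it can route through):

const sorConfig: SorConfig = {
    chainId: 1,
    weth: '0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2',
    vault: '0xBA12222222228d8Ba445958a75a0704d566BF2C8',
    staBal3Pool: {
        id: '0x06df3b2bbb68adc8b0e302443692037ed9f91b42000000000000000000000063',
        address: '0x06df3b2bbb68adc8b0e302443692037ed9f91b42',
    },
};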

Pool Data

The goal here is to allow for flexibility in defining where the pool data is fetched from. We define a generic PoolDataService that has a single function, getPools, which serves as a generic interface for fetching pool data. This allows any number of custom services to be used without having to change anything in the SOR or SDK.

export interface PoolDataService {
    getPools(): Promise<SubgraphPoolBase[]>;
}

Approaching it this way means all the Subgraph and on-chain/multicall fetching logic is removed from the SOR. These will be added to the Balancer SDK as stand-alone services. As a simple example, here is a PoolDataService that retrieves data from the Subgraph:

import fetch from 'isomorphic-fetch'; // or any fetch implementation
import { PoolDataService, SubgraphPoolBase } from '@balancer-labs/sor';
import { Query } from './subgraphQueries'; // per-chain GraphQL pools queries (defined elsewhere)

export class SubgraphPoolDataService implements PoolDataService {
    constructor(
        private readonly chainId: number,
        private readonly subgraphUrl: string
    ) {}

    public async getPools(): Promise<SubgraphPoolBase[]> {
        const response = await fetch(this.subgraphUrl, {
            method: 'POST',
            headers: {
                Accept: 'application/json',
                'Content-Type': 'application/json',
            },
            body: JSON.stringify({ query: Query[this.chainId] }),
        });

        const { data } = await response.json();

        return data.pools ?? [];
    }
}

Asset Pricing

The goal here is to allow for flexibility in defining where token prices are fetched from. We define a generic TokenPriceService that has a single function, getNativeAssetPriceInToken. Similar to the PoolDataService, this offers flexibility in the service that can be used, e.g. CoingeckoTokenPriceService or SubgraphTokenPriceService.

export interface TokenPriceService {
    /**
     * This should return the price of the native asset (ETH) in the token defined by tokenAddress.
     * Example: BAL = $20 USD, ETH = $4,000 USD, then 1 ETH = 200 BAL. This function would return 200.
     * @param tokenAddress
     */
    getNativeAssetPriceInToken(tokenAddress: string): Promise<string>;
}

All the CoinGecko code is removed from the SOR (to be added to SDK). An example TokenPriceService using CoinGecko:

import fetch from 'isomorphic-fetch'; // or any fetch implementation
import { TokenPriceService } from '@balancer-labs/sor';

export class CoingeckoTokenPriceService implements TokenPriceService {
    constructor(private readonly chainId: number) {}

    public async getNativeAssetPriceInToken(
        tokenAddress: string
    ): Promise<string> {
        const ethPerToken = await this.getTokenPriceInNativeAsset(tokenAddress);

        // We get the price of token in terms of ETH
        // We want the price of 1 ETH in terms of the token
        return `${1 / parseFloat(ethPerToken)}`;
    }

    /**
     * @dev Assumes that the native asset has 18 decimals
     * @param tokenAddress - the address of the token contract
     * @returns the price of the token in terms of the native asset (ETH)
     */
    async getTokenPriceInNativeAsset(tokenAddress: string): Promise<string> {
        const endpoint = `https://api.coingecko.com/api/v3/simple/token_price/${this.platformId}?contract_addresses=${tokenAddress}&vs_currencies=${this.nativeAssetId}`;

        const response = await fetch(endpoint, {
            headers: {
                Accept: 'application/json',
                'Content-Type': 'application/json',
            },
        });

        const data = await response.json();

        if (
            data[tokenAddress.toLowerCase()][this.nativeAssetId] === undefined
        ) {
            throw Error('No price returned from Coingecko');
        }

        return data[tokenAddress.toLowerCase()][this.nativeAssetId];
    }

    private get platformId(): string {
        switch (this.chainId) {
            case 1:
                return 'ethereum';
            case 42:
                return 'ethereum';
            case 137:
                return 'polygon-pos';
            case 42161:
                return 'arbitrum-one';
        }

        return '2';
    }

    private get nativeAssetId(): string {
        switch (this.chainId) {
            case 1:
                return 'eth';
            case 42:
                return 'eth';
            case 137:
                return '';
            case 42161:
                return 'eth';
        }

        return '';
    }
}

Final Outcome

After the changes the updated diagram shows how the SOR is more focused and less opinionated:

The plan for the Balancer front-end is to move away from using the SOR directly and use it via the SDK package. The SDK will have the data fetching functionality as separate services (which can be used independently for fetching pools, etc) and these will be passed to the SOR when the SDK is instantiated. BUT it’s also possible to use the SOR independently as shown in this swapExample.
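
As a rough sketch of how the pieces fit together (constructor and method names are taken from the SOR 3.x examples and may differ slightly between versions), wiring the services into the SOR looks something like this:

import { JsonRpcProvider } from '@ethersproject/providers';
import { parseFixed } from '@ethersproject/bignumber';
import { SOR, SwapTypes } from '@balancer-labs/sor';

const provider = new JsonRpcProvider(`https://mainnet.infura.io/v3/${process.env.INFURA}`);

const sor = new SOR(
    provider,
    sorConfig, // the SorConfig shown earlier
    new SubgraphPoolDataService(
        1,
        'https://api.thegraph.com/subgraphs/name/balancer-labs/balancer-v2'
    ),
    new CoingeckoTokenPriceService(1)
);

async function querySwap(tokenIn: string, tokenOut: string) {
    // Pools are fetched via whatever PoolDataService was injected above
    await sor.fetchPools();
    return sor.getSwaps(
        tokenIn,
        tokenOut,
        SwapTypes.SwapExactIn,
        parseFixed('1', 18) // 1 tokenIn, assuming 18 decimals
    );
}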

This was a large and breaking change but with the continued Subgraph issues and more teams using the SOR/SDK it was a necessary upgrade. Many thanks to Daniel from the Beethoven team for pushing this through!

Etherscan API

Etherscan is a really useful website but fairly recently I discovered they also have a pretty handy Developer API. I used it to analyse the gas a contract was using over a period of time and thought it might be useful to record.

Getting Started

First of all an API key is required and this can be created for free by logging in and going to: https://etherscan.io/myapikey.

I’m going to paste my code with comments which should be easy enough to follow. The API covers a whole lot of other endpoints such as Logs, Tokens, etc that I didn’t use, but the docs are good and worth a scan.

Example
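
A minimal sketch of the idea (the contract address and block range are placeholders): pull a contract’s transactions with the account/txlist endpoint and total up the gas used.

import fetch from 'isomorphic-fetch';

const API_KEY = process.env.ETHERSCAN_API_KEY;
const CONTRACT = '0x...'; // contract of interest

// Sum the gas used by every transaction sent to the contract in a block range
async function totalGasUsed(startBlock: number, endBlock: number): Promise<bigint> {
    const url =
        `https://api.etherscan.io/api?module=account&action=txlist` +
        `&address=${CONTRACT}&startblock=${startBlock}&endblock=${endBlock}` +
        `&sort=asc&apikey=${API_KEY}`;

    const response = await fetch(url);
    const { status, result } = await response.json();
    if (status !== '1') throw new Error(`Etherscan error: ${result}`);

    // Each record includes gasUsed (gas units) and gasPrice (wei)
    return result.reduce(
        (total: bigint, tx: { gasUsed: string }) => total + BigInt(tx.gasUsed),
        BigInt(0)
    );
}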

Photo by Markus Spiske on Unsplash

Buidler, Waffle & Ethers

Lately at Balancer we’ve moved from the Truffle development environment to using Buidler, Waffle and Ethers. The main benefit is being able to use console.log in Solidity during debugging – it’s amazing how much of a difference this makes and for this alone the change over is worth it. Here are some notes I made during the switch over.

Ethers

The ethers.js library aims to be a complete and compact library for interacting with the Ethereum Blockchain and its ecosystem.

Documentation is here: https://docs.ethers.io and this Web3.js vs Ethers.js guide was useful.

The following gist demonstrates some basic usage of Ethers: it creates an instance of a deployed contract and then runs some calls against it:
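
A minimal sketch of that pattern (the ABI fragment and addresses are placeholders, not a real deployment):

import { ethers } from 'ethers';

// Connect to a node, create a contract instance from its address + ABI, then call it
const provider = new ethers.providers.JsonRpcProvider('http://localhost:8545');

const POOL_ABI = ['function getBalance(address token) view returns (uint256)'];
const POOL_ADDRESS = '0x...'; // deployed contract address

async function readBalance(token: string) {
    const pool = new ethers.Contract(POOL_ADDRESS, POOL_ABI, provider);
    const balance = await pool.getBalance(token);
    console.log(`Pool balance: ${ethers.utils.formatEther(balance)}`);
}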

Buidler & Waffle

Buidler is described as a ‘task runner’. I think it’s easiest to see it as a swap for Truffle/Ganache. It has lots of different plugins that make it really useful and its documentation was refreshingly good.

The Quickstart shows you how to install and how to run common tasks. It also uses Waffle for testing. Waffle is a simple smart contract testing library built on top of Ethers.js. Tests in Waffle are written using Mocha alongside Chai and from my experience everything just worked. The docs are here. And it’s worth digging in to see some of the useful things it offers, such as Chai Matchers which allow you to test things like reverts, events, etc.

Buidler commands I found I used a lot:

  • Run the local Buidler EVM: $ npx buidler node
  • Compile project contracts: $ npx buidler compile
  • Run tests: $ npx buidler test ./test/testfile.ts

Here’s an example test file I used that demonstrates a few useful things:
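
A cut-down sketch of that kind of test file (the contract and its constructor argument mirror the deploy script further down; the assertions are just illustrative):

import { ethers } from '@nomiclabs/buidler';
import { expect } from 'chai';
import { Contract, Signer } from 'ethers';

describe('ExchangeProxy', () => {
    const WETH = '0xd0A1E359811322d97991E03f863a0C30C2cF029C'; // Kovan WETH, as in the deploy script
    let proxy: Contract;
    let adminSigner: Signer;
    let newUserSigner: Signer;

    before(async () => {
        // Buidler EVM provides a set of funded test accounts
        [adminSigner, newUserSigner] = await ethers.getSigners();
        const ExchangeProxy = await ethers.getContractFactory('ExchangeProxy');
        proxy = await ExchangeProxy.deploy(WETH);
        await proxy.deployed();
    });

    it('deploys to a valid address', async () => {
        expect(proxy.address).to.be.properAddress; // Waffle Chai matcher
    });

    it('can be called from a different account', async () => {
        // contract.connect(signer) routes calls through that signer (see below)
        const userProxy = proxy.connect(newUserSigner);
        expect(await userProxy.signer.getAddress()).to.equal(
            await newUserSigner.getAddress()
        );
    });
});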

Static Calls

let poolAddr = await factory.callStatic.newBPool(); – The contract callStatic pretends that a call is not state-changing and returns the result. This does not actually change any state and is free.

Connecting Different Accounts

await _pools[1].connect(newUserSigner).approve(PROXY, MAX); – Using contract connect(signer) calls the contract via the signer specified.

Gas Costs

await proxy.connect(newUserSigner).exitswapExternAmountOut(
    _POOLS[1],
    MKR,
    amountOut,
    startingBptBalance,
    {
        gasPrice: 0
    }
);

Setting the gasPrice to 0 like above allows me to run the transaction without spending any Eth on it. This was useful when checking Eth balance changes without having to worry about gas costs.

Custom accounts & balances

const config: BuidlerConfig = {
  solc: {
    version: "0.5.12",
    optimizer: {
      enabled: true,
      runs: 200,
    },
  },
  networks: {
    buidlerevm: {
      blockGasLimit: 20000000,
      accounts: [
        { privateKey: '0xPrefixedPrivateKey1', balance: '1000000000000000000000000000000' },
        { privateKey: '0xPrefixedPrivateKey2', balance: '1000000000000000000000000000000' }
      ]
    },
  },
};

I needed the test accounts to have more than the 1000 ETH balance set by default. In buidler.config.ts you can add accounts with custom balances like above.

Deploying

Deploying is done using scripts. First I updated my buidler.config.ts with the account/key for Kovan that will be used to deploy (i.e. must have Eth):

const config: BuidlerConfig = {
  solc: {
    version: "0.5.12",
    optimizer: {
      enabled: true,
      runs: 200,
    },
  },
  networks: {
    buidlerevm: {
      blockGasLimit: 20000000,
    },
    kovan: {
      url: `https://kovan.infura.io/v3/${process.env.INFURA}`,
      accounts: [`${process.env.KEY}`]
    }
  },
};

Then I wrote a deploy-script.js:

async function main() {
  // We get the contract to deploy
  const ExchangeProxy = await ethers.getContractFactory("ExchangeProxy");
  const WETH = '0xd0A1E359811322d97991E03f863a0C30C2cF029C';
  const exchangeProxy = await ExchangeProxy.deploy(WETH);

  await exchangeProxy.deployed();

  console.log("Proxy deployed to:", exchangeProxy.address);
}

main()
  .then(() => process.exit(0))
  .catch(error => {
    console.error(error);
    process.exit(1);
  });

Then run this using: npx buidler run --network kovan deploy-script.js

🎉 Console Logging 🎉

One of the holy grails of Solidity development and so easy to setup in this case! There are also Solidity stack traces and error messages but unfortunately there was a bug that caused this not to work for our contracts.

To get this going all you need to do is add: import "@nomiclabs/buidler/console.sol"; at the top of your contract then use console.log. More details on what kind of outputs, etc it supports are here. Lifesaver!

Hope some of this was helpful and you enjoy using it as much as me.

(Photo by Kevin Jarrett on Unsplash)

Setting Up Testing – Typescript with Mocha/Chai

This is more a reminder post in case I ever have to do something like this again, so it’s kind of boring!

For pure Javascript testing with Mocha/Chai:

$ yarn add mocha
$ yarn add chai

In package.json:

"scripts":{
    "test":"mocha"
}

Add a test dir:

$ mkdir test

Add first test file:

var expect  = require('chai').expect;
var request = require('request');

it('Main page content', function(done) {
    request('http://localhost:8080' , function(error, response, body) {
        expect(body).to.equal('Hello World');
        done();
    });
});

Now for Typescript:

$ yarn add typescript
$ yarn add ts-node --dev
$ yarn add @types/chai --dev
$ yarn add @types/mocha --dev

 Replace test script:

"test": "mocha -r ts-node/register test/*.spec.ts"

And make sure test is in root/test/example.spec.ts

import { expect, assert } from 'chai';
import 'mocha';
import { Pool } from '../src/types';
import { smartOrderRouter } from '../src/sor';
import { BigNumber } from '../src/utils/bignumber';
import { getSpotPrice, BONE } from '../src/helpers';

const errorDelta = 10 ** -8;

function calcRelativeDiff(expected: BigNumber, actual: BigNumber): BigNumber {
    return expected
        .minus(actual)
        .div(expected)
        .abs();
}

// These example pools are taken from python-SOR SOR_method_comparison.py
let balancers: Pool[] = [
    {
        id: '0x165021F95EFB42643E9c3d8677c3430795a29806',
        balanceIn: new BigNumber(1.341648768830377422).times(BONE),
        balanceOut: new BigNumber(84.610322835523687996).times(BONE),
        weightIn: new BigNumber(0.6666666666666666),
        weightOut: new BigNumber(0.3333333333333333),
        swapFee: new BigNumber(0.005).times(BONE),
    },
    {
        id: '0x31670617b85451E5E3813E50442Eed3ce3B68d19',
        balanceIn: new BigNumber(14.305796722007608821).times(BONE),
        balanceOut: new BigNumber(376.662367824920653194).times(BONE),
        weightIn: new BigNumber(0.6666666666666666),
        weightOut: new BigNumber(0.3333333333333333),
        swapFee: new BigNumber(0.000001).times(BONE),
    },
];

describe('Two Pool Tests', () => {
    it('should test spot price', () => {
        var sp1 = getSpotPrice(balancers[0]);
        var sp2 = getSpotPrice(balancers[1]);

        // Taken from python-SOR, SOR_method_comparison.py
        var sp1Expected = new BigNumber(7968240028251420);
        var sp2Expected = new BigNumber(18990231371439040);

        var relDif = calcRelativeDiff(sp1Expected, sp1);
        assert.isAtMost(
            relDif.toNumber(),
            errorDelta,
            'Spot Price Balancer 1 Incorrect'
        );

        relDif = calcRelativeDiff(sp2Expected, sp2);
        assert.isAtMost(
            relDif.toNumber(),
            errorDelta,
            'Spot Price Balancer 2 Incorrect'
        );
    });

    it('should test two pool SOR swap amounts', () => {
        var amountIn = new BigNumber(0.7).times(BONE);
        var swaps = smartOrderRouter(
            balancers,
            'swapExactIn',
            amountIn,
            10,
            new BigNumber(0)
        );

        // console.log(swaps[0].amount.div(BONE).toString())
        // console.log(swaps[1].amount.div(BONE).toString())
        assert.equal(swaps.length, 2, 'Should be two swaps for this example.');

        // Taken from python-SOR, SOR_method_comparison.py
        var expectedSwap1 = new BigNumber(635206783664651400);
        var relDif = calcRelativeDiff(expectedSwap1, swaps[0].amount);
        assert.isAtMost(relDif.toNumber(), errorDelta, 'First swap incorrect.');

        var expectedSwap2 = new BigNumber(64793216335348570);
        relDif = calcRelativeDiff(expectedSwap2, swaps[1].amount);
        assert.isAtMost(
            relDif.toNumber(),
            errorDelta,
            'Second swap incorrect.'
        );
    });

    it('should test two pool SOR swap amounts highestEpNotEnough False branch.', () => {
        var amountIn = new BigNumber(400).times(BONE);
        var swaps = smartOrderRouter(
            balancers,
            'swapExactIn',
            amountIn,
            10,
            new BigNumber(0)
        );

        // console.log(swaps[0].amount.div(BONE).toString())
        // console.log(swaps[1].amount.div(BONE).toString())
        assert.equal(swaps.length, 2, 'Should be two swaps for this example.');
        assert.equal(
            swaps[0].pool,
            '0x31670617b85451E5E3813E50442Eed3ce3B68d19',
            'First pool.'
        );
        assert.equal(
            swaps[1].pool,
            '0x165021F95EFB42643E9c3d8677c3430795a29806',
            'Second pool.'
        );

        // Taken from python-SOR, SOR_method_comparison.py with input changed to 400
        var expectedSwap1 = new BigNumber(326222020689680300000);
        var relDif = calcRelativeDiff(expectedSwap1, swaps[0].amount);
        assert.isAtMost(relDif.toNumber(), errorDelta, 'First swap incorrect.');

        var expectedSwap2 = new BigNumber(73777979310319780000);
        relDif = calcRelativeDiff(expectedSwap2, swaps[1].amount);
        assert.isAtMost(
            relDif.toNumber(),
            errorDelta,
            'Second swap incorrect.'
        );
    });

    // Check case mentioned in Discord
});

Photo by René Porter on Unsplash

TypeScript 1 – Getting Going & Migrating

I’m currently working on the Burner Signal project. So far I’ve created the React app that will hopefully be used as the proof of concept.

One problem – so far I’ve done everything in pure Javascript but there’s a strong desire to use TypeScript only.

Oh and another problem – I haven’t developed with TypeScript before! 🤔😂

But…this is a perfect opportunity to learn something new especially as the best way to learn something is to actually build something with it.

So after a bit of reading I do get what the proposed benefits of TypeScript are:

  • Because it uses Types and it transpiles to Javascript the compiler can catch errors – I can definitely see the benefits in this! (There’s a small example after this list.)
  • Using Types is a kind of self documentation
  • IDE integration – dev environments provide lots of TypeScript support which should make it more efficient and faster to develop
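
For the first point, a trivial example of the kind of error the compiler catches before the code ever runs:

function totalCost(price: number, qty: number): number {
    return price * qty;
}

totalCost('10', 2);
// Compile error: Argument of type 'string' is not assignable to parameter of type 'number'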

I will give it a shot and see if the above is true!

The first thing I need to do is get my current create-react-app which is using pure Javascript migrated to use TypeScript. It was surprisingly easy!

  1. yarn add typescript @types/node @types/react @types/react-dom @types/jest
  2. Change an existing .js file to use .tsx (or .ts)
  3. Restart server – this is important!
  4. That’s it!

Now to learn the basics of TypeScript. For this I’m using the React-TypeScript Cheatsheet and the first suggestion is to get familiar with TypeScript by following 2ality’s guide which I’m working through next.

🎉✨🔥 Winner Winner! 🔥✨🎉

Well this is cool – I won the Gitcoin x Aave Hackathon!

My entry was a bot that does arbitrage between two Uniswap exchanges using an Aave Flashloan as the capital for the initial trade. The Aave judges were “super impressed” with my work and I got a special mention for the way I overcame a testnet problem by forking UniSwap and customising the code to allow trading between two exchanges with the same token.

Happy days!

Note – Deploying A Contract Via A Delegate With Truffle Set-Up

This is quite random but I had to learn a few things so worth taking note.

I was recently working on a Truffle Dapp. I had to deploy one of my contracts in a roundabout way – basically one account signing it but another paying the gas (Metatransactions are basically the same). I’d never done this before. After that I still wanted my Truffle Dapp to be able to access the deployed contract but this required a bit of a tweak. So here’s how I did it.

Deploying Via A Delegate

This was done using a simple node script and the process looks like this:

  • Unlock your ‘sender’ account and ‘delegate’ account
  • Compile the smart contract to get the bytecode
    • In this case I used Truffle to compile and accessed the bytecode from the artifact file
  • Create a transaction object with the bytecode as the call data
  • Sender signs the transaction object
  • Delegate sends the signed transaction
  • Note the receipt so you have access to the deployed contract address

I’ve included the code I used for this below.

Using Truffle With The Deployed Contract

Because the contract was deployed using the script, the Truffle artifacts etc don’t have the required information so my Dapp couldn’t interact. By making a few manual changes I managed to get it to work:

  • Make sure your contract has previously been compiled using Truffle so it has an artifact file.
  • Find your Truffle artifact file for the contract. It should be of the form YourContract.json and is probably under a ‘contracts’ folder in your Dapp’s project.
  • Find the “networks” section of the artifact.
  • Add a new network entry with the following info:
    • Network ID – should be the network that you deployed to.
    • Address – The address your contract was deployed to (from the receipt)
    • TransactionHash – I don’t think this is actually required but it was handy to record it.
    • My entry is shown in the gist below.
  • That’s it! Your Dapp should now work with the deployed contract.

Code Gist

https://gist.github.com/johngrantuk/453de92c28bfae6848ebe13e3e62c74f

Photo by David Travis on Unsplash

NuCypher & Proxy Re-encryption

Photo by Joshua Sortino on Unsplash

In April I entered (and won!) the NuCypher+CoinList hackathon. I didn’t actually know much about the NuCypher tech before I got started, but once I had built my DApp it was clear this is really interesting stuff and it’s stuck with me ever since as something I’d like to build on.

Proxy Re-encryption

The NuCypher solution will eventually provide a decentralised privacy infrastructure but during the hackathon I was mainly making use of a subset of the tech, Proxy Re-encryption.

Proxy re-encryption is a set of encryption algorithms that allow you to transform encrypted data. Specifically… it allows you to re-encrypt data — so you have data that’s encrypted under one set of keys, you can re-encrypt the data without de-encrypting it first, so that now it’s encrypted under a second, different set of keys —NuCypher co-founder MacLane Wilkison

So What?

To understand why this is pretty awesome imagine I have some encrypted data I want to share with Bob, what are the options to do this?

Crazy way – I just give my private encryption key to Bob (who I’m sharing the data with) who can use it to decrypt the data. But now Bob has my key and who knows where this ends up.

Inefficient way – I decrypt the encrypted data then re-encrypt it using Bob’s public key. This is more secure for sure but I have to do a lot more work. What if I have to do this many times? What if the encrypted data is stored and accessed over a network? How’s the information all being shared? Intensive!

How about the Proxy Re-encryption way:

With Proxy Re-encryption I encrypt the data once.

The encrypted data can be stored anywhere — Amazon, Dropbox, IPFS, etc. I only need to upload it once and provide access to the Proxy service (eventually this will be a NuCypher decentralised service)

The Proxy can re-encrypt the data for anyone else I choose (provided I have their public key) efficiently and without ever having access to the decrypted data.

Bob decrypts the data using his own key and resources.

If the data I’m sharing is a stream, i.e. a Twitter feed, then I can enable/revoke decryption access whenever I want — i.e. I can stop someone seeing the data.

NuCypher will eventually provide a decentralised privacy infrastructure which will replace a centralized proxy with a decentralized network. A really good overview of the NuCypher solution is here.

Combine all this with decentralised smart contract as a source of access — very cool!

My DApp — thisfeedisalwaysforsale

My DApp was inspired by Simon de la Rouviere’s This Artwork Is Always On Sale where he implements a Harberger Tax on the ownership of a digital artwork. In my app, instead of an artwork, access to a feed of data is always for sale. NuCypher is used to encrypt the data and only the current Patron can decrypt (using NuCypher) to get access. Anyone can buy this access from the current Patron for the sale price set when they took ownership. Whilst they hold ownership they pay a 5% fee to the feed owner. In the demo app the data is a Twitter like feed but the concept could be extended to have more than one Patron and could also be used for other kinds of feed data such as sensor data, camera/video feeds, music, etc.

I was super happy to get a mention in Token Economy as Stefano’s favourite entry!

Cool Crypto — Sending Value With A Link

One of the functions I found the most interesting was the ability to send some value, in this case xDai, to someone using a link (try out the app here). The functionality behind this method is pretty cool and makes use of a lot of web3/crypto fundamentals. I found it interesting to dig into it and thought it would be worth sharing.

To Begin

First of all, the Dapp isn’t really ‘sending’ the xDai, it’s more of a deposit/claim pattern with some cool cryptography used to make sure only the person with the correct information, provided by the sender via the link, can claim the value.

Secondly there are two parts — the web Dapp and the smart contract on the blockchain. The web Dapp is really just a nice way of interacting with the smart contract.

Step By Step

A step by step description helped me get it into my head. Please note that I’m not showing all the details of the code here, it’s more a high level description to show the concept.

Sending

Using the Dapp the ‘sender’ enters the amount they want to send and hits send.

App Send Screen

Once send is hit the Dapp does a bit of work in the background to get the inputs required by the smart contract set-up.

The Dapp uses web3.js to hash some random data:

let randomHash = web3.utils.sha3("" + Math.random());

Now the Dapp uses web3.js to generate a random account which will have a private key and a public key:

let randomWallet = web3.eth.accounts.create();

The random hashed data is then signed using the random wallet private key (see below for more details on signing, etc):

let sig = web3.eth.accounts.sign(randomHash, randomWallet.privateKey);

The Dapp sends a transaction to the blockchain smart contract with the value equal to the amount that is being sent along with the signature and the hashed data:

Contract.send(randomHash, sig.signature), 140000, false, value ...
// This is just pseudo code to give the gist, see the repo for the full code

The smart contract contains a mapping of Fund structures to bytes32 ID keys:

struct Fund {
    address sender;
    address signer;
    uint256 value;
    uint256 nonce;
    bool claimed;
}

mapping (bytes32 => Fund) public funds;

When the Dapp ‘sends’ the value the smart contract creates a new Fund structure with the signer field set to the public key of the random wallet created by the Dapp. This field is important as it is used as the check when a claim is made:

newFund = Fund({
    sender: msg.sender,
    signer: randomWallet.publicKey,
    value: msg.value,
    nonce: nonce,
    claimed: false
})

Now the newFund is mapped using the randomHash value as the key:

funds[randomHash] = newFund;

The xDai from the sender has now been sent to the smart contract and is ready to be claimed by anyone with the required information.

Finally the Dapp generates a link with the randomHash and random wallet private key as the link parameters:

Link Format: xDai.io/randomHash;privateKey

The link can then be copied and sent via WhatsApp, SMS, etc.
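
For illustration, building the link on the send side and pulling the two pieces back out on the claim side is roughly this (just a sketch based on the format above):

// Sender side: build the claim link from the hash and the throwaway private key
let link = `https://xdai.io/${randomHash};${randomWallet.privateKey}`;

// Claimer side: split the path back into the two pieces
let [claimHash, claimPrivateKey] = window.location.pathname
    .slice(1) // drop the leading '/'
    .split(';');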

The Claim:

Here it’s probably worth noting the link is really just a nice way to share the important information required to claim the xDai. The Dapp also does the hard work of interacting with the blockchain smart contract.

When the link is visited the Dapp parses the randomHash and the privateKey from the link.

It then signs a message using the privateKey from the link:

let accountHash = web3.utils.sha3(claimAccount);
let sig = web3.eth.accounts.sign(accountHash, privateKey);

Now the smart contract claim function is called using the signature and the original data:

Contract.claim(accountHash, sig, randomHash, claimAccount)

The Solidity ecrecover function is used to get the public address from the signature (this is the magic, see the info below):

address signer = recoverSigner(accountHash, sig);

Finally, the smart contract checks the fund with key matching randomHash has the ‘signer’ equal to the address recovered from the signature. If it does then it can send the value to the claimers account:

if(funds[randomHash].signer == signer && funds[randomHash].claimed == false){    
    funds[randomHash].claimed = true;
    claimAccount.send(funds[randomHash].value);
}

Phew, that’s it! It feels like a lot going on but basically it’s a smart way for a user to store value in a smart contract that can only be claimed with the correct information, without revealing what that information is on the public blockchain.

The Cool Cryptography

Signing, ecrecover, eh what?? There are some things that are probably worth going into in a bit more detail.

Wallets, Accounts, etc

An account generated with web3.eth.accounts.create() has its own private key and public key. More info can be found in the docs and here. The private and public keys are linked through an algorithm that has signing and validation properties.

Signing & Validating

The following is a very brief summary of this helpful post.

Signing is the act of a user “signing” data that anyone can validate came from that user.

The signing function will take in a private key and the data. The output will be another string that is the signature.

To validate that the signature is from the owner of the private key, the signature, the original data and the public key are required.

A validator function is run that recovers the public key from the signed data.

The recovered public key is then compared to the original one and if both are the same the signature is valid.
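
A quick web3.js illustration of that flow (not the Dapp’s actual code, just the same accounts API used in the snippets above):

// Assumes a web3 instance as in the earlier snippets
const account = web3.eth.accounts.create();
const message = web3.utils.sha3('some data');

// 1. Owner signs the hashed data with their private key
const { signature } = web3.eth.accounts.sign(message, account.privateKey);

// 2. Anyone can recover the signing address from the data + signature
const recovered = web3.eth.accounts.recover(message, signature);

// 3. If it matches the expected address, the signature is valid
console.log(recovered === account.address); // true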

ecrecover

In this case the Solidity ecrecover (Elliptic Curve Recover) function is used to recover the address associated with the public key. The recoverSigner function in the smart contract code shows an example of how this is done and this is a pretty decent explanation of what is going on.

I think that’s a pretty awesome example of crypto in action!