Setting Up Testing – TypeScript with Mocha/Chai

This is more of a reminder post in case I ever have to do something like this again, so it's kind of boring!

For pure JavaScript testing with Mocha/Chai (the example test below also uses the request package, so add that too):

$ yarn add mocha
$ yarn add chai
$ yarn add request

In package.json:

"scripts":{
    "test":"mocha"
}

Add a test dir:

$ mkdir test

Add first test file:

var expect  = require('chai').expect;
var request = require('request');
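// Assumes a server is already running on localhost:8080 and responds with 'Hello World'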

it('Main page content', function(done) {
    request('http://localhost:8080' , function(error, response, body) {
        expect(body).to.equal('Hello World');
        done();
    });
});

Now for TypeScript:

$ yarn add typescript
$ yarn add ts-node --dev
$ yarn add @types/chai --dev
$ yarn add @types/mocha --dev

Replace the test script:

"test": "mocha -r ts-node/register test/*.spec.ts"

And make sure the test file is at root/test/example.spec.ts:

import { expect, assert } from 'chai';
import 'mocha';
import { Pool } from '../src/types';
import { smartOrderRouter } from '../src/sor';
import { BigNumber } from '../src/utils/bignumber';
import { getSpotPrice, BONE } from '../src/helpers';

const errorDelta = 10 ** -8;

function calcRelativeDiff(expected: BigNumber, actual: BigNumber): BigNumber {
    return expected
        .minus(actual)
        .div(expected)
        .abs();
}

// These example pools are taken from python-SOR SOR_method_comparison.py
let balancers: Pool[] = [
    {
        id: '0x165021F95EFB42643E9c3d8677c3430795a29806',
        balanceIn: new BigNumber(1.341648768830377422).times(BONE),
        balanceOut: new BigNumber(84.610322835523687996).times(BONE),
        weightIn: new BigNumber(0.6666666666666666),
        weightOut: new BigNumber(0.3333333333333333),
        swapFee: new BigNumber(0.005).times(BONE),
    },
    {
        id: '0x31670617b85451E5E3813E50442Eed3ce3B68d19',
        balanceIn: new BigNumber(14.305796722007608821).times(BONE),
        balanceOut: new BigNumber(376.662367824920653194).times(BONE),
        weightIn: new BigNumber(0.6666666666666666),
        weightOut: new BigNumber(0.3333333333333333),
        swapFee: new BigNumber(0.000001).times(BONE),
    },
];

describe('Two Pool Tests', () => {
    it('should test spot price', () => {
        var sp1 = getSpotPrice(balancers[0]);
        var sp2 = getSpotPrice(balancers[1]);

        // Taken from python-SOR, SOR_method_comparison.py
        var sp1Expected = new BigNumber(7968240028251420);
        var sp2Expected = new BigNumber(18990231371439040);

        var relDif = calcRelativeDiff(sp1Expected, sp1);
        assert.isAtMost(
            relDif.toNumber(),
            errorDelta,
            'Spot Price Balancer 1 Incorrect'
        );

        relDif = calcRelativeDiff(sp2Expected, sp2);
        assert.isAtMost(
            relDif.toNumber(),
            errorDelta,
            'Spot Price Balancer 2 Incorrect'
        );
    });

    it('should test two pool SOR swap amounts', () => {
        var amountIn = new BigNumber(0.7).times(BONE);
        var swaps = smartOrderRouter(
            balancers,
            'swapExactIn',
            amountIn,
            10,
            new BigNumber(0)
        );

        // console.log(swaps[0].amount.div(BONE).toString())
        // console.log(swaps[1].amount.div(BONE).toString())
        assert.equal(swaps.length, 2, 'Should be two swaps for this example.');

        // Taken from python-SOR, SOR_method_comparison.py
        var expectedSwap1 = new BigNumber(635206783664651400);
        var relDif = calcRelativeDiff(expectedSwap1, swaps[0].amount);
        assert.isAtMost(relDif.toNumber(), errorDelta, 'First swap incorrect.');

        var expectedSwap2 = new BigNumber(64793216335348570);
        relDif = calcRelativeDiff(expectedSwap2, swaps[1].amount);
        assert.isAtMost(
            relDif.toNumber(),
            errorDelta,
            'Second swap incorrect.'
        );
    });

    it('should test two pool SOR swap amounts highestEpNotEnough False branch.', () => {
        var amountIn = new BigNumber(400).times(BONE);
        var swaps = smartOrderRouter(
            balancers,
            'swapExactIn',
            amountIn,
            10,
            new BigNumber(0)
        );

        // console.log(swaps[0].amount.div(BONE).toString())
        // console.log(swaps[1].amount.div(BONE).toString())
        assert.equal(swaps.length, 2, 'Should be two swaps for this example.');
        assert.equal(
            swaps[0].pool,
            '0x31670617b85451E5E3813E50442Eed3ce3B68d19',
            'First pool.'
        );
        assert.equal(
            swaps[1].pool,
            '0x165021F95EFB42643E9c3d8677c3430795a29806',
            'Second pool.'
        );

        // Taken from python-SOR, SOR_method_comparison.py with input changed to 400
        var expectedSwap1 = new BigNumber(326222020689680300000);
        var relDif = calcRelativeDiff(expectedSwap1, swaps[0].amount);
        assert.isAtMost(relDif.toNumber(), errorDelta, 'First swap incorrect.');

        var expectedSwap2 = new BigNumber(73777979310319780000);
        relDif = calcRelativeDiff(expectedSwap2, swaps[1].amount);
        assert.isAtMost(
            relDif.toNumber(),
            errorDelta,
            'Second swap incorrect.'
        );
    });

    // Check case mentioned in Discord
});

Photo by René Porter on Unsplash

TypeScript 1 – Getting Going & Migrating

I’m currently working on the Burner Signal project. So far I’ve created the React app that will hopefully be used as the proof of concept.

One problem – so far I’ve done everything in pure Javascript but there’s a strong desire to use TypeScript only.

Oh and another problem – I haven’t developed with TypeScript before! 🤔😂

But…this is a perfect opportunity to learn something new, especially as the best way to learn something is to actually build something with it.

So after a bit of reading I do get what the proposed benefits of TypeScript are:

  • Because it uses types and transpiles to JavaScript, the compiler can catch errors – I can definitely see the benefits in this!
  • Using types is a kind of self-documentation
  • IDE integration – dev environments provide lots of TypeScript support, which should make development more efficient and faster

I will give it a shot and see if the above is true!
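To make the first point concrete, here's a trivial example (my own illustration) of the kind of slip the compiler catches before the code ever runs:

function add(a: number, b: number): number {
    return a + b;
}

// Plain JavaScript would happily return the string "12" here;
// the TypeScript compiler rejects it instead:
// error TS2345: Argument of type '"2"' is not assignable to parameter of type 'number'.
const result = add(1, '2');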

The first thing I need to do is get my current create-react-app, which is using pure JavaScript, migrated to use TypeScript. It was surprisingly easy!

  1. yarn add typescript @types/node @types/react @types/react-dom @types/jest
  2. Rename an existing .js file to .tsx
  3. Restart server – this is important!
  4. That’s it!

Now to learn the basics of TypeScript. For this I'm using the React-TypeScript Cheatsheet and the first suggestion is to get familiar with TypeScript by following 2ality's guide, which I'm working through next.

🎉✨🔥 Winner Winner! 🔥✨🎉

Well this is cool – I won the Gitcoin x Aave Hackathon!

My entry was a bot that does arbitrage between two Uniswap exchanges, using an Aave Flashloan as the capital for the initial trade. The Aave judges were “super impressed” with my work and I got a special mention for the way I overcame a testnet problem by forking Uniswap and customising the code to allow trading between two exchanges with the same token.

Happy days!

Note – Deploying A Contract Via A Delegate With Truffle Set-Up

This is quite random but I had to learn a few things, so it's worth taking note.

I was recently working on a Truffle Dapp. I had to deploy one of my contracts in a roundabout way – basically one account signing it but another paying the gas (Metatransactions are basically the same). I’d never done this before. After that I still wanted my Truffle Dapp to be able to access the deployed contract but this required a bit of a tweak. So here’s how I did it.

Deploying Via A Delegate

This was done using a simple node script and the process looks like this:

  • Unlock your ‘sender’ account and ‘delegate’ account
  • Compile the smart contract to get the bytecode
    • In this case I used Truffle to compile and accessed the bytecode from the artifact file
  • Create a transaction object with the bytecode as the call data
  • Sender signs the transaction object
  • Delegate sends the signed transaction
  • Note the receipt so you have access to the deployed contract address

I’ve included the code I used for this below.
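In outline it looks something like this (a sketch assuming web3.js 1.x; the contract name and gas value are illustrative, the full code is in the gist at the end):

// deployViaDelegate.js: deploy by broadcasting a pre-signed raw transaction
const Web3 = require('web3');
const web3 = new Web3('http://localhost:8545');

// Bytecode comes from the Truffle artifact produced by `truffle compile`
const artifact = require('./build/contracts/MyContract.json');

async function deployViaDelegate(senderPrivateKey) {
    // Transaction object with the bytecode as the call data
    // (no `to` field means contract creation)
    const tx = {
        data: artifact.bytecode,
        gas: 4000000,
    };

    // Sender signs the transaction object
    const signed = await web3.eth.accounts.signTransaction(tx, senderPrivateKey);

    // Delegate broadcasts the signed transaction to the network
    const receipt = await web3.eth.sendSignedTransaction(signed.rawTransaction);

    // The receipt holds the deployed contract address
    console.log('Deployed at:', receipt.contractAddress);
    return receipt;
}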

Using Truffle With The Deployed Contract

Because the contract was deployed using the script, the Truffle artifacts etc. don't have the required information, so my Dapp couldn't interact. By making a few manual changes I managed to get it to work:

  • Make sure your contract has previously been compiled using Truffle so it has an artifact file.
  • Find the Truffle artifact file for the contract. It should be of the form YourContract.json and is probably under a ‘contracts’ folder in your Dapp’s project.
  • Find the “networks” section of the artifact.
  • Add a new network entry with the following info:
    • Network ID – should be the network that you deployed to.
    • Address – The address your contract was deployed to (from the receipt)
    • TransactionHash – I don’t think this is actually required but it was handy to record it.
    • My entry is shown in the gist below.
  • That’s it! Your Dapp should now work with the deployed contract.
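For illustration, the added entry looks roughly like this ("4" being the Rinkeby network ID as an example; the address and hash are placeholders, my actual entry is in the gist below):

"networks": {
    "4": {
        "address": "0xYourDeployedContractAddress",
        "transactionHash": "0xYourDeploymentTxHash"
    }
}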

Code Gist

https://gist.github.com/johngrantuk/453de92c28bfae6848ebe13e3e62c74f

Photo by David Travis on Unsplash

NuCypher & Proxy Re-encryption

Photo by Joshua Sortino on Unsplash

In April I entered (and won!) the NuCypher+CoinList hackathon. I didn't actually know much about the NuCypher tech before I got started, but once I had built my DApp it was clear this is really interesting stuff, and it has stuck with me ever since as something interesting to build on.

Proxy Re-encryption

The NuCypher solution will eventually provide a decentralised privacy infrastructure but during the hackathon I was mainly making use of a subset of the tech, Proxy Re-encryption.

Proxy re-encryption is a set of encryption algorithms that allow you to transform encrypted data. Specifically… it allows you to re-encrypt data — so you have data that’s encrypted under one set of keys, you can re-encrypt the data without de-encrypting it first, so that now it’s encrypted under a second, different set of keys — NuCypher co-founder MacLane Wilkison

So What?

To understand why this is pretty awesome, imagine I have some encrypted data I want to share with Bob. What are the options to do this?

Crazy way – I just give my private encryption key to Bob (who I’m sharing the data with), who can use it to decrypt the data. But now Bob has my key and who knows where that ends up.

Inefficient way – I decrypt the encrypted data then re-encrypt it using Bob’s public key. This is more secure for sure but I have to do a lot more work. What if I have to do this many times? What if the encrypted data is stored and accessed over a network? How’s the information all being shared? Intensive!

How about the Proxy Re-encryption way:

With Proxy Re-encryption I encrypt the data once.

The encrypted data can be stored anywhere — Amazon, Dropbox, IPFS, etc. I only need to upload it once and provide access to the Proxy service (eventually this will be a decentralised NuCypher service).

The Proxy can re-encrypt the data for anyone else I choose (provided I have their public key) efficiently and without ever having access to the decrypted data.

Bob decrypts the data using his own key and resources.

If the data I’m sharing is a stream, i.e. a Twitter feed, then I can enable/revoke decryption access whenever I want — i.e. I can stop someone seeing the data.

NuCypher will eventually provide a decentralised privacy infrastructure which will replace a centralized proxy with a decentralized network. A really good overview of the NuCypher solution is here.

Combine all this with decentralised smart contracts as a source of access control — very cool!

My DApp — thisfeedisalwaysforsale

My DApp was inspired by Simon de la Rouviere’s This Artwork Is Always On Sale, where he implements a Harberger Tax on the ownership of a digital artwork. In my app, instead of an artwork, access to a feed of data is always for sale. NuCypher is used to encrypt the data and only the current Patron can decrypt (using NuCypher) to get access. Anyone can buy this access from the current Patron for the sale price set when they took ownership. Whilst they hold ownership they pay a 5% fee to the feed owner. In the demo app the data is a Twitter-like feed, but the concept could be extended to have more than one Patron and could also be used for other kinds of feed data such as sensor data, camera/video feeds, music, etc.

I was super happy to get a mention in Token Economy as Stefano’s favourite entry!

Cool Crypto — Sending Value With A Link

One of the functions I found the most interesting was the ability to send some value, in this case xDai, to someone using a link (try out the app here). The functionality behind this method is pretty cool and makes use of a lot of web3/crypto fundamentals. I found it interesting to dig into it and thought it would be worth sharing.

To Begin

First of all, the Dapp isn’t really ‘sending’ the xDai; it’s more of a deposit/claim pattern, with some cool cryptography used to make sure only the person with the correct information, provided by the sender via the link, can claim the value.

Secondly there are two parts — the web Dapp and the smart contract on the blockchain. The web Dapp is really just a nice way of interacting with the smart contract.

Step By Step

A step by step description helped me get it into my head. Please note that I’m not showing all the details of the code here, it’s more a high level description to show the concept.

Sending

Using the Dapp the ‘sender’ enters the amount they want to send and hits send.

App Send Screen

Once send is hit, the Dapp does a bit of work in the background to set up the inputs required by the smart contract.

The Dapp uses web3.js to hash some random data:

let randomHash = web3.utils.sha3("" + Math.random());

Now the Dapp uses web3.js to generate a random account which will have a private key and a public key:

let randomWallet = web3.eth.accounts.create();

The random hashed data is then signed using the random wallet private key (see below for more details on signing, etc):

let sig = web3.eth.accounts.sign(randomHash, randomWallet.privateKey);

The Dapp sends a transaction to the blockchain smart contract with the value equal to the amount that is being sent along with the signature and the hashed data:

Contract.send(randomHash, sig.signature), 140000, false, value ...
// This is just a pseudo code to give the gist, see the repo for the full code

The smart contract contains a mapping of Fund structures to bytes32 ID keys:

struct Fund {
    address sender;
    address signer;
    uint256 value;
    uint256 nonce;
    bool claimed;
}

mapping (bytes32 => Fund) public funds;

When the Dapp ‘sends’ the value, the smart contract creates a new Fund structure with the signer field set to the address of the random wallet created by the Dapp. This field is important as it is used as the check when a claim is made:

newFund = Fund({
    sender: msg.sender,
    signer: randomWallet.address,
    value: msg.value,
    nonce: nonce,
    claimed: false
})

Now the newFund is mapped using the randomHash value as the key:

funds[randomHash] = newFund;

The xDai from the sender has now been sent to the smart contract and is ready to be claimed by anyone with the required information.

Finally the Dapp generates a link with the randomHash and random wallet private key as the link parameters:

Link Format: xDai.io/randomHash;privateKey

The link can then be copied and sent via WhatsApp, SMS, etc.

The Claim:

Here it’s probably worth noting the link is really just a nice way to share the important information required to claim the xDai. The Dapp also does the hard work of interacting with the blockchain smart contract.

When the link is visited the Dapp parses the randomHash and the privateKey from the link.
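One way that parsing might look (assuming the link format from above, xDai.io/randomHash;privateKey):

// The path is of the form /randomHash;privateKey
const [randomHash, privateKey] = window.location.pathname.slice(1).split(';');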

It then signs a message using the privateKey from the link:

let accountHash = web3.utils.sha3(claimAccount);
let sig = web3.eth.accounts.sign(accountHash, privateKey);

Now the smart contract claim function is called using the signature and the original data:

Contract.claim(accountHash, sig, randomHash, claimAccount)

The Solidity ecrecover function is used to get the public address from the signature (this is the magic, see the info below):

address signer = recoverSigner(accountHash, sig);

Finally, the smart contract checks that the fund with key matching randomHash has a ‘signer’ equal to the address recovered from the signature. If it does, it can send the value to the claimer’s account:

if(funds[randomHash].signer == signer && funds[randomHash].claimed == false){    
    funds[randomHash].claimed = true;
    claimAccount.send(funds[randomHash].value);
}

Phew, that’s it! It feels like a lot is going on, but the basic idea is that it’s a smart way for a user to store value in a smart contract that can only be claimed with the correct information, without revealing what that information is on the public blockchain.

The Cool Cryptography

Signing, ecrecover, eh what?? There are some things that are probably worth going into in a bit more detail.

Wallets, Accounts, etc

An account generated with web3.eth.accounts.create() has its own private key and public key. More info can be found in the docs and here. The private and public keys are linked through an algorithm that has signing and validation properties.

Signing & Validating

The following is a very brief summary of this helpful post.

Signing is the act of a user “signing” data that anyone can validate came from that user.

The signing function will take in a private key and the data. The output will be another string that is the signature.

To validate that the signature is from the owner of the private key, the signature, the original data and the public key are required.

A validator function is run that recovers the public key from the signed data.

The recovered public key is then compared to the original one and if both are the same the signature is valid.
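This can be seen in action with web3.js (1.x assumed):

// Sign some data with a fresh account...
const account = web3.eth.accounts.create();
const sig = web3.eth.accounts.sign('some data', account.privateKey);

// ...then recover the signer's address from the data and signature
const recovered = web3.eth.accounts.recover('some data', sig.signature);
console.log(recovered === account.address); // true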

ecrecover

In this case the Solidity ecrecover (Elliptic Curve Recover) function is used to recover the address associated with the public key. The recoverSigner function in the smart contract code shows an example of how this is done and this is a pretty decent explanation of what is going on.
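For reference, a typical recoverSigner follows the standard v/r/s splitting pattern (a sketch, not necessarily the app's exact code):

function recoverSigner(bytes32 hash, bytes memory sig)
    internal
    pure
    returns (address)
{
    // A signature is the 65-byte concatenation of r (32), s (32) and v (1)
    require(sig.length == 65);
    bytes32 r;
    bytes32 s;
    uint8 v;
    assembly {
        r := mload(add(sig, 32))
        s := mload(add(sig, 64))
        v := byte(0, mload(add(sig, 96)))
    }
    if (v < 27) v += 27;
    // ecrecover returns the address whose key produced the signature over `hash`
    return ecrecover(hash, v, r, s);
}

One gotcha worth knowing: web3.eth.accounts.sign prefixes the message with "\x19Ethereum Signed Message:\n" before hashing and signing, so the hash checked on-chain has to be built the same way for the recovered address to match.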

I think that’s a pretty awesome example of crypto in action!

Ethereum — Vyper Development Using Truffle

Why Vyper?

Vyper is a contract-oriented, pythonic programming language that targets the Ethereum Virtual Machine (EVM)

Vyper is a relatively new language that has been written with a focus on security, simplicity and auditability. It’s written in a Pythonic way which appeals to me and as a more secure alternative to Solidity I think it has a lot of potential. I plan on writing more about working with Vyper in the future.

Truffle — Too Much Of A Sweet Tooth?

I’ve recently finished working on a hackathon project and completed the 2018 ConsenSys Academy, and during that time, for better or worse, I’ve become pretty accustomed to using the Truffle development environment for writing code, testing and deploying — it just makes life easier.

So, in an ideal world I’d like to use Truffle for working with Vyper. After a bit of investigation I found this ERC721 Vyper implementation by Maurelian who did the work to make it Truffle compatible. I thought it might be useful to document the build process for use in other projects.

How To — Vyper Development Using Truffle

Install Vyper

The first step is to make sure Vyper is installed locally. If this has been done before you can skip — you can check by running the $ vyper -h command. There are various ways to install, including using PIP, the docs are here. I’m using a Mac and did the following:

Set up virtual environment:

$ virtualenv -p python3.6 --no-site-packages ~/vyper-venv

Remember to activate the environment:

$ source ~/vyper-venv/bin/activate

Then in my working dir:

$ git clone https://github.com/ethereum/vyper.git
$ cd vyper
$ make
$ make test

Install Truper

Next I installed Truper, a tool written by Maurelian to compile Vyper contracts to Truffle compatible artifacts. It uses Vyper which is why we installed it previously. (See the next section for details of what it’s doing). To install run:

$ npm i -g truper

Compiling, Testing, Deploying

From your project dir (you can clone the ERC-721 project for a quick test).

Run ganache test network:

$ ganache-cli

Compile any Solidity contracts as usual using:

$ truffle compile

Compile Vyper contracts using the command:

$ truper

(This must be called from the project dir, and the virtual environment you built Vyper in must be active.)

Truffle tests can be written and run the usual way. Use the artifacts in test files:

const NFToken = artifacts.require('NFToken.vyper');

Run tests using:

$ truffle test
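A minimal test sketch (assuming the NFToken contract from the ERC-721 project has been migrated):

const NFToken = artifacts.require('NFToken.vyper');

contract('NFToken', accounts => {
    it('should have deployed with an address', async () => {
        const nfToken = await NFToken.deployed();
        assert(nfToken.address);
    });
});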

Truffle migrations also work the usual way. For example I used the following migration file to deploy to ganache:

2_deploy_contracts.js:

const NFToken = artifacts.require('NFToken.vyper');
const TokenReceiverMockVyper = artifacts.require('NFTokenReceiverTestMock.vyper');

module.exports = function(deployer) {
  deployer.deploy(NFToken, [], []);
  deployer.deploy(TokenReceiverMockVyper);
};

Then run:

$ truffle migrate

What’s Going On

Truper uses Vyper which is why we installed it in the first step. If we look at https://github.com/maurelian/truper/blob/master/index.js we can see Truper is creating Truffle artifact files for each Vyper contract and writing them to the ./build/contracts folder of the project.

Truffle Artifact Files

These *.json files contain descriptions of their respective smart contracts. The description includes:

  • Contract name
  • Contract ABI (Application Binary Interface — a list of all the functions in the smart contracts along with their parameters and return values). Created by Truper using: $ vyper -f json file.vy
  • Contract bytecode (compiled contract data). Created by Truper using: $ vyper -f bytecode file.vy
  • Contract deployed bytecode (the latest version of the bytecode which was deployed to the blockchain). Created by Truper using: $ vyper -f bytecode_runtime file.vy
  • The compiler version with which the contract was last compiled. (Doesn’t appear to get added until deployed.)
  • A list of networks onto which the contract has been deployed and the address of the contract on each of those networks. (Doesn’t appear to get added until deployed.)

Maurelian describes it as a hacky stop-gap but it works so thank you!

Progress

Well that’s been a fun and productive couple of months!

ConsenSys Academy 2018

I’m now officially a ConsenSys certified dApp Developer 👊! (Certificate apparently on its way)

The ConsenSys Developer course was definitely worthwhile. I covered a lot of Blockchain theory while following the course lectures and taking the quizzes. The real learning and fun came from the final project where I actually had to build something.

ConsenSys Academy Final Project

My final project was a bounty DApp that allows anyone to upload a picture of an item they want identified, along with an associated bounty in Eth for the best answer. I got a lot of experience using the various parts of the Web3 technology stack. I used Truffle for development/testing, IPFS for storing the pictures and data (was cool to use this, very powerful idea), uPort for identity, OpenZeppelin libraries (which are really useful), an upgradeable design pattern, deployment to Rinkeby and lots of practice securing and testing smart contracts.

Colony Hackathon Winner

I also managed to bag myself a prize in the Colony Hackathon for my decentralised issue reporting app. I got the Creativity Honorable Mention which was pretty cool and I used my winnings to buy a Devcon IV ticket ✈️ 🤘!!

The Learnings

I came across a few things that I wanted to do while I was #BUIDLING but couldn’t easily find the info on so I’ve been keeping a kind of cheat sheet. Hopefully it might help someone else out there.

https://github.com/johngrantuk/dAppCheatSheet/blob/master/README.md

The Future Is Bright

Over the last few months I’ve confirmed to myself that the Blockchain/Ethereum world is something I want to be involved in. There are so many different, exciting areas to investigate further; now I just have to choose one and dive further down the rabbit hole!

Django – Custom Migrations

The Problem

I have a Django app running on Heroku. Recently I had to change one of the models to add a new ForeignKey. The app in production has data stored in the database that I want to keep and this existing data can be used to define the new ForeignKey field.

After a bit of searching I discovered I could run a custom migration to manipulate the data in the database. Before now I never really understood what migrations were doing. As usual the Django docs are really good, and for this particular case I found this How to Create Django Data Migrations blog post useful – Vitor Freitas's stuff is always good.

The Solution

First I added the new field to the model:

from django.db import models

class modelToChange(models.Model):
    manualAz = models.TextField(default="0")
    manualEl = models.TextField(default="0")
    manualPol = models.TextField(default="0")
    newField = models.ForeignKey(FkModel, null=True)

Next I ran the makemigrations command, which is responsible for creating new migrations based on the changes you have made to your models (kind of obvious really):

$ python manage.py makemigrations

This creates a new migration file in the app/migrations folder which looks like:
# -*- coding: utf-8 -*-
# Generated by Django 1.11.7 on 2018-02-08 11:57
from __future__ import unicode_literals

from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('myApp', '0015_auto_20180201_0936'),
    ]

    operations = [
        migrations.AddField(
            model_name='modelToChange',
            name='newField',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='myApp.FkModel'),
        ),
    ]

This adds a new field, ‘newField’, to the existing model, ‘modelToChange’. The field is a ForeignKey field link to another model class called FkModel.

Normally I’d run the migrate command next. Migrate is responsible for applying and unapplying migrations; basically, it updates the database. In this case I want to add some custom code to the migration to update the new field using existing data.

After the additions the migration file looks like this:

# -*- coding: utf-8 -*-
# Generated by Django 1.11.7 on 2018-02-08 11:57
from __future__ import unicode_literals

from django.db import migrations, models
import django.db.models.deletion


def addCustom(apps, schema_editor):
    # Use the historical versions of the models via apps.get_model,
    # not direct imports
    ExistingRecords = apps.get_model('myApp', 'modelToChange')
    ForeignKeyRecords = apps.get_model('myApp', 'FkModel')

    for message in ExistingRecords.objects.all():
        fkRecord = ForeignKeyRecords.objects.filter(id=message.id)

        if len(fkRecord) == 0:
            # No matching record exists yet, so create a placeholder one
            fkRecord = ForeignKeyRecords(id="unknown:" + str(message.id))
            fkRecord.save()
        else:
            fkRecord = fkRecord[0]

        # Point the new ForeignKey field at the record and save
        message.newField = fkRecord
        message.save()


class Migration(migrations.Migration):

    dependencies = [
        ('myApp', '0015_auto_20180201_0936'),
    ]

    operations = [
        migrations.AddField(
            model_name='modelToChange',
            name='newField',
            field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, to='myApp.FkModel'),
        ),

        migrations.RunPython(addCustom),
    ]

Notice the addition of migrations.RunPython(addCustom) to the Migration class and the new function, addCustom.

The last step is just to run the migrate command:

$ python manage.py migrate

When the migrate command is run the addCustom function will be called. The addCustom function iterates through existing records in the database and adds the foreign key object to the existing data.

Heroku Application

It’s also possible to run custom migrations on a Heroku application. Migration files will be pushed to the app along with new code. Once this is deployed the following command can be run:

$ heroku run python manage.py migrate

You can put your app in maintenance mode first with:

heroku maintenance:on

Django And Heroku Postgres Databases

I’m using Heroku to run one of my Django applications. The application has a Heroku Postgres Add-on for storing data. I’d like to use this ‘live’ data when I’m working on the application on my local development set-up.

I can see two options – connect directly to the live database or retrieve a copy to use locally. Using the live database directly could lead to issues if I make a mistake so I’m going with the local copy.

PostgreSQL Mac Set-up

Initially my local app was using SQLite, so the first thing I need to do is get my development Mac set up for PostgreSQL. As detailed in the Heroku docs I used the Postgres.app package.

To confirm it was running ok:

$ which psql
 /Applications/Postgres.app/Contents/Versions/latest/bin/psql

And verified:

$ psql -h localhost
 psql (10.1)
 Type "help" for help.

Making A Local Database Copy

The pg:pull command can be used to pull remote data from a Heroku Postgres database to a database on the local machine. I also want to use a user and password so the command I used is:

PGUSER=postgres PGPASSWORD=password heroku pg:pull DATABASE_URL bcmlocaldb

A few things to note:

  • This is run from the main application directory on my local machine.
  • PGUSER and PGPASSWORD set the authentication credentials for the local db.
  • My Django app has the Database URL stored under the DATABASE_URL environment variable. The URL can be viewed with the $ heroku config command or on the Heroku dashboard if you want to use it directly.
  • bcmlocaldb is the name of the new local db.

Once the command has successfully run I could then see the db in the PostgreSQL dashboard on my local machine.

PostgreSQL Dashboard

Django Settings To Use Local DB

Now that I have a local copy of the db, I just need to change the local app’s settings to use this instead of the old SQLite. I’m using the DJ-Database-URL utility to configure my environment variable, so in my local .env file I changed:

DATABASE_URL=sqlite:///db.sqlite3

to:

DATABASE_URL=postgres://postgres:password@localhost:5432/bcmlocaldb
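For reference, the settings.py side of this (assuming the standard DJ-Database-URL wiring) looks roughly like:

# settings.py: dj-database-url reads the DATABASE_URL environment variable
import dj_database_url

DATABASES = {
    'default': dj_database_url.config(default='sqlite:///db.sqlite3')
}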

Now when I run the local application it’s using the new database!

Heroku has a lot of nice documentation and the Postgres info can be found here.

Alternative – Connect to live Heroku Postgres Database

To get the application’s Postgres URI I just run the $ heroku config command; you will see something along the lines of:

$ heroku config

DATABASE_URL: postgres://your_username:the_password@host.amazonaws.com:5432/database

Now you just replace the local DATABASE_URL .env variable with the info above.

There’s some good documentation going into more detail about connecting from outside Heroku here.