TLDR: EIP-7702 allows externally owned accounts to temporarily behave like smart contract wallets during a transaction, enabling them to use smart contract features like batching and gas sponsorship without permanently changing the account type. BUT wallets/RPCs control the auth flow, not apps, and not all wallets support EIP-7702 yet.
EIP-7702 Overview
EIP-7702 introduces a new transaction type, 0x04, in Ethereum’s Pectra upgrade that enables externally owned accounts (EOAs) to execute temporary smart contract functionality.
The delegation process looks like:
A smart contract must exist that the EOA can delegate to
The EOA signs an authorization designating that contract for the account
An EIP-7702 transaction is sent along with the Authorization
When processed, the network records that this EOA should delegate to the specific smart contract
The EOA appears to have the SC code attached to it directly
The code will be valid until replaced by another authorization
msg.sender in these transactions remains the EOA’s address
Wallets Are In Control
Applications cannot directly use EIP-7702 to delegate user accounts to their preferred smart accounts.
Internal tests written to use an injected browser wallet as the Viem account throw, because it is not possible to sign an authorization over JSON-RPC right now: https://github.com/wevm/viem/discussions/3285
Although it’s technically possible for any entity to create an EIP-7702 authorization, wallet providers have made it clear that they will reject transactions containing authorization fields from applications. Instead, wallets will manage EIP-7702 authorizations themselves, upgrading their users to the wallet’s chosen smart account implementation. There will be fragmentation as different wallets adopt different implementations.
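For illustration, here is roughly what the flow looks like with a viem local account, where signing an authorization does work today. This is a sketch against viem’s experimental EIP-7702 actions (names may have shifted between versions; the private key and contract address are placeholders):

import { createWalletClient, http } from 'viem';
import { privateKeyToAccount } from 'viem/accounts';
import { sepolia } from 'viem/chains';
import { eip7702Actions } from 'viem/experimental';

const account = privateKeyToAccount('0x...'); // placeholder private key
const client = createWalletClient({
  account,
  chain: sepolia,
  transport: http(),
}).extend(eip7702Actions());

// Sign an authorization designating a (placeholder) delegate contract.
const authorization = await client.signAuthorization({
  contractAddress: '0x0000000000000000000000000000000000000000',
});

// Send a type-0x04 transaction carrying the authorization.
await client.sendTransaction({
  authorizationList: [authorization],
  to: account.address,
  data: '0x',
});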
Worth noting that EIP-5792 can be used to query which capabilities a wallet supports via RPC calls (see 7702beat and 7702 checker as examples of existing tools).
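As a rough sketch, a capability check against an injected EIP-1193 provider might look like this (wallet_getCapabilities is the EIP-5792 method; support and the exact response shape vary by wallet, and the account address is a placeholder):

const capabilities = await (window as any).ethereum.request({
  method: 'wallet_getCapabilities',
  params: ['0xYourAccountAddress'],
});

// Example (wallet-dependent) shape: { '0x1': { atomicBatch: { supported: true } } }
console.log(capabilities);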
In this post, we’ll walk through the lifecycle of a swap in Balancer V3, using the recently released MEV hook as our lens. We’ll trace how a swap flows through the system — from the initial call to the final token transfers — highlighting the key steps and components along the way.
One of the standout strengths of Balancer V3 is how much heavy lifting it does for you. As a developer, you can focus on building your core logic — whether that’s a novel pool type or a specialised hook — without needing to reinvent the plumbing. The core swap flow is already implemented, audited, and optimised. This lets you move fast, stay safe, and build confidently on top of the protocol.
Anatomy of a Balancer V3 Swap
Before diving into the code, it’s useful to understand the main contracts involved in a swap. Balancer V3 separates responsibilities across a few key components, allowing developers to focus only on the parts they need to customise.
The diagram below outlines the four primary contracts involved and their roles in the flow. While the Router and Vault handle most of the core logic — routing, fund management, and safety checks — the Pool and Hook are the only contracts where your custom logic lives.
Primary Contracts Used During Swaps
Digging Into the Swap Flow: Code Highlights
Now that we have an understanding of the contracts involved in a Balancer V3 swap, let’s take a closer look at the code. In this section, we’ll walk through key parts of the swap flow, highlighting how each piece fits into the larger system and where the MEV hook plays a role.
The user initiates the swap by interacting with the protocol through a Router, rather than directly engaging with the Vault. This approach abstracts away some of the complexity, simplifying the user experience. In this case, the Balancer Router contract is used, and the swapSingleTokenExactIn function is called to execute the swap:
User interacts with the Router
The parameters passed to the swapSingleTokenExactIn function specify key swap details, including the pool, tokens, and the amount to be swapped. At this stage, the amounts are “raw”—they are scaled to the relevant token decimals but do not yet account for any rate adjustments or slippage. For example, if the user wanted to swap 1 USDC, the exactAmountIn would be 1000000 (reflecting the 6 decimal places of USDC).
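As a quick illustration with viem’s parseUnits helper:

import { parseUnits } from 'viem';

// 1 USDC as a raw amount: scaled by the token's 6 decimals, nothing else.
const exactAmountIn = parseUnits('1', 6); // 1000000n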
It’s also important to note that at this point, the Vault is unlocked, meaning that the transient accounting state has been initiated and any subsequent actions must result in balanced accounting.
Inside the Router contract, the actual interaction with the Vault happens within the _swapHook function. This is where the Router forwards the swap request by calling _vault.swap(...), passing along the relevant parameters:
Router calls Vault swap
Inside the Vault’s swap function, the first major step is preparing the data needed to execute the swap. This involves loading the current state of the pool and converting raw token amounts into “live” values that reflect current scaling factors, rates, and fees.
Preparing for the Swap in the Vault
What’s happening here:
poolData: Captures the current state of the pool, including both raw and scaled balances, token rates, and scaling factors.
swapState: Derives token indices, applies scaling to the input amount, and loads the static swap fee for the pool.
poolSwapParams: Bundles all of the data required to compute the swap, with every value already adjusted to use “live” scaling.
At this point, the Vault has everything it needs to perform accurate swap calculations based on real-time conditions.
At this stage in the Vault’s swap function, any beforeSwap hook would typically be executed if configured. However, in the case of the MEV hook, this step is not used.
What is relevant here is the dynamicSwapFee hook, which is called to allow the hook contract to specify a dynamic fee based on the current context:
Calling the Hook to get SwapFee
This check ensures that the pool has a dynamic fee hook configured. If it does, the Vault delegates to the hook via HooksConfigLib, allowing custom logic to influence the swap fee before the main calculation proceeds.
Execution now enters the MevCaptureHook contract. The HooksConfigLib calls the onComputeDynamicSwapFeePercentage function, allowing the hook to define a custom fee based on the current context:
Inside the Hook — Custom Fee Logic
This is where developers can begin implementing their own logic. In the MEV hook, the function does the following:
Checks if MEV tax is enabled — If not, it simply returns a static fee.
Checks if the sender is exempt from fee — If so, it returns a static fee. Note the use of isTrustedRouter here, which ensures the Router is correctly forwarding the original sender. This helps enforce that only legitimate callers are considered for fee exemption.
Calculates the MEV-specific swap fee — This is based on configurable parameters and market context.
The actual fee calculation is handled in the _calculateSwapFeePercentage function. While we won’t dive into every detail of the MEV logic, it’s a great example of how hooks can introduce dynamic, context-aware behaviour into the swap flow. In this case, the hook adjusts the swap fee based on gas conditions to protect users from MEV exploitation.
Dynamic Fee Calculation Logic in the MEV Hook
This function dynamically adjusts the swap fee based on the priority gas price — the difference between the transaction’s gas price and the block’s base fee.
Base Conditions:
If the priority gas price is below the configured threshold (suggesting a retail user), the function returns the standard static fee.
If the maximum MEV fee cap is less than or equal to the static fee, it also returns the static fee.
Dynamic Fee Calculation:
For transactions with a priority gas price above the threshold, the function computes a fee increment.
The increment is calculated as: (priorityGasPrice - threshold) * multiplier / 1e18 (see the sketch after this list)
This increment is then added to the static swap fee percentage.
Safety Mechanisms:
The function uses OpenZeppelin’s Math.tryMul to guard against overflow.
If an overflow is detected, it gracefully falls back to the maximum allowed MEV fee.
The final result is always capped at the maximum MEV swap fee percentage.
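Putting the above together, here is a minimal TypeScript sketch of the fee logic as described (names are illustrative and values are 18-decimal fixed-point; bigint arithmetic makes the Solidity overflow guard unnecessary here):

const WAD = 10n ** 18n;

function dynamicSwapFee(
  priorityGasPrice: bigint, // tx gas price minus block base fee
  threshold: bigint,
  multiplier: bigint,
  staticFee: bigint,
  maxMevFee: bigint,
): bigint {
  // Base conditions: retail-looking transactions, or a fee cap at or
  // below the static fee, both fall back to the static fee.
  if (priorityGasPrice < threshold || maxMevFee <= staticFee) return staticFee;

  // Fee increment proportional to the priority gas price above the threshold.
  const increment = ((priorityGasPrice - threshold) * multiplier) / WAD;
  const fee = staticFee + increment;

  // Always capped at the maximum MEV swap fee percentage.
  return fee > maxMevFee ? maxMevFee : fee;
}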
Once calculated, this dynamic fee is returned to the Vault, where it’s used in the next phase of the swap — starting with the fee deduction:
Continuing the swap: Applying the fee
In this example, we’re following an EXACT_IN type swap. That means the swap fee is deducted from the input amount provided by the user. For an EXACT_OUT swap, the fee would instead be taken from the amount calculated.
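A tiny sketch of that EXACT_IN deduction (illustrative names; the fee percentage is 18-decimal fixed-point):

// The swap fee is taken from the user's input before the pool math runs.
function applyExactInFee(exactAmountIn: bigint, swapFeePercentage: bigint): bigint {
  const swapFeeAmount = (exactAmountIn * swapFeePercentage) / 10n ** 18n;
  return exactAmountIn - swapFeeAmount;
}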
After applying the swap fee, the Vault calls the pool’s onSwap hook. This is the entry point for pool-specific logic — each pool type (Weighted, Stable, etc.) defines its own math to calculate how much of the output token to return for a given input.
Pool Swap Logic
The onSwap function returns the calculated amount, still in live-scaled form, meaning it reflects the token’s rate and scaling factors.
Once the amount is scaled back to raw units, the Vault checks the limit: it ensures that the amountOut is greater than the minimum amount the user was willing to accept (based on the slippage tolerance). If the limit check passes, the Vault moves to the final phase of the swap.
takeDebt/supplyCredit: These functions handle transient accounting for the transaction.
The Vault also deducts protocol and creator fees from the swap.
It then updates the _poolTokenBalances to reflect the tokenIn and tokenOut amounts.
A Swap event is emitted, which is useful for off-chain tracking and monitoring.
For this particular example, the Vault is essentially done here. However, for other hooks that include afterSwap functionality, this logic would be handled before the Vault returns the final amountIn and amountOut values (which are raw scaled).
Once the Vault has completed its duties, we return to the Router, where the final token transfers and accounting are handled in the swapSingleTokenHook function:
Final Token Transfers and Accounting
_takeTokenIn: This function handles the collection of input tokens from the user and the final settlement for transient accounting. It’s designed to work with both standard ERC20 tokens and the special case of ETH/WETH, making it versatile for different token types.
_sendTokenOut: This function manages the delivery of output tokens to the user and the final settlement for transient accounting. Like its counterpart, it supports both ERC20 tokens and ETH/WETH, ensuring consistent token handling regardless of the token type.
At this point, all credit and debt accumulated during the swap execution should be finalized. If transient accounting isn’t correctly settled, the swap would fail at this stage. Assuming everything checks out, the user has successfully swapped their tokens.
Conclusion
Through this walkthrough, we’ve followed the path of a swap in Balancer V3 and seen how the protocol handles the majority of the core logic under the hood. This leaves developers free to focus on the parts that matter most to them, like designing new hooks or customizing pool behavior.
Balancer’s modular architecture means you can build on a solid, audited foundation without needing to understand every detail from scratch — but having a clear picture of what’s happening under the surface can make things easier when you’re building something non-standard, troubleshooting edge cases, or just trying to understand how your code fits into the bigger system.
If you’re ready to start building check out the Balancer V3 developer docs for more detailed references and guides.
Balancer recently launched StableSurge, the first production hook on V3 — an innovative directional fee mechanism that dynamically adjusts swap fees to help protect stable-asset pegs during volatility.
This article explores how Balancer V3 was leveraged to bring StableSurge to life, introduces the tech stack that powers it, and highlights how collaboration across multiple service providers transformed this novel concept into a fully deployed production feature.
Idea To Contract Made Easy
The BLabs Smart Contracts team implemented the code — the StableSurgeHook itself and the associated factory.
The Hooks architecture enables developers to focus on their core functionality without worrying about the complexities of the Balancer Vault, Pools, and other internals — these components simply “just work.”
Beyond development, audits can be faster and more cost-effective since the bounded scope reduces the risk of unintended issues. For example, StableSurge was fully audited in just one week.
The result? A shorter development cycle and faster time to market.
The final step for the SC team, once audits are complete, is the production deployment to all networks supported by Balancer. This kicks off the final integration of the off-chain components.
Operational Data
Balancer’s data teams focus on two key roles: operations and analysis. Operationally, on-chain data must be accessible in a way that enables consumers, such as the front-end, to utilize it effectively. Balancer achieves this through the Subgraph and its open-source API, run on in-house infrastructure.
Metadata
The Balancer Metadata repo serves as a central repository for storing critical information about pools, hooks, tokens, and more, all of which are utilized by the Balancer front-end. For example, the entry for StableSurge includes a description and deployment addresses, ensuring that the front-end can retrieve and display the correct details.
Subgraph
The Subgraph is a lightweight data layer for V3, built by indexing on-chain events. To add support for a new Hook/Pool, the relevant config (addresses, ABIs, etc.) must be added (see the StableSurge PR for reference). Any new, important parameters must also be identified and tracked; for StableSurge the following params were included: amp, maxSurgeFeePercentage, surgeThresholdPercentage.
API
The Balancer API builds on top of the Subgraph, transforming and augmenting data into a more usable format for other layers of the stack, including the SDK and front-end. To support a new Hook or Pool, the hook address must be added to the config, along with any new parameters. Some additional custom work may also be required, such as APR tracking or other specific calculations.
Integrations
Building and deploying the code is just the first step — adoption is what makes it valuable. The Integrations team ensures that a new product developed on Balancer’s platform is usable, accessible, and widely adopted. Packages are provided to make it easier to interact with the smart contracts and to replicate the core maths off-chain. New hooks/pools are integrated into the swap router, and the team works closely with external aggregators to drive deeper ecosystem integration.
Balancer Maths
The Balancer maths repo contains reference mathematical implementations, in Javascript and Python, for supported Balancer pools and hooks. When we want to support a new hook type, we add an implementation that should match the smart contract code 100% (and similarly for a new pool type). You can see an example PR for adding the Python implementation of StableSurge here. The final step is to publish the updated NPM package, which will be used in the SOR and aggregator integrations.
Smart Order Router
The Balancer Smart Order Router (SOR) identifies optimal swap paths for a given token pair and is accessible through the Balancer API. When a new pool/hook is created, it must be integrated into the SOR. Swap results are calculated using the Balancer Maths package, so that must be updated, and any hook-specific parameters must be passed appropriately.
SDK
The Balancer SDK is a Typescript/Javascript library for interfacing with the Balancer protocol. This includes common contract interactions such as adding and removing liquidity. The SDK leverages the API and the SC query functionality, which means no changes are needed to support add/remove/swaps for new pool types or hooks (provided they work with the Balancer Routers).
To support pool creation, a small update is required whenever a new factory is developed. As StableSurge uses a dedicated factory, a PR was made to add this.
Pool Creation
A Pool Creation UI is provided by Balancer to make the creation of new pools easy. Similarly to the SDK, an update is required when a new factory is developed. Otherwise, a pool can be configured to use a hook as detailed in the docs.
Aggregators
V3 Aggregator Routing
To maximize volume, we aim to expose Balancer V3 liquidity to as many aggregators and solvers as possible. We collaborate closely with these teams, each of whom has their own unique approach. Our efforts include:
Creating detailed pool/hook-specific docs with all necessary information, such as the StableSurge reference as an example.
Notifying teams of new launches and offering direct support.
Contributing directly where possible, such as the Paraswap PR adding StableSurge support.
Bringing StableSurge to the Front End
StableSurge UI
The design and front-end teams play a crucial role in integrating StableSurge into the user experience, ensuring that all relevant hook information is accessible and intuitive. Their contributions include:
Displaying key information by linking the selected pool, hook, and metadata.
By building on the foundational work of the backend teams, the front-end and design teams ensure that StableSurge is not only functional but also user-friendly and informative.
Partnerships and Launch
In parallel with the technical development, the BizDev team has been actively identifying and collaborating with partners to prepare for launch. Partners who benefit from fee surging are naturally interested in improving pool performance for their liquidity providers while enhancing peg stability, making the value proposition clear.
The launch plan centered around key partners with an appetite for innovation and an interest in this particular product, including Aave, Treehouse, USDX, and emerging LST projects like Inception’s inwstETH and Loop’s slpETH. From an operations perspective, the Balancer Maxis team supported partners in creating and seeding pools, ensuring a smooth onboarding process.
A particularly strong collaboration emerged with Aave, where integrating GHO into a StableSurge pool with boosted features provided a comprehensive liquidity solution. Shortly after launch, the GHO/USDC Base pool quickly scaled to over $5 million TVL.
With the launch ongoing, data is being collected to fine-tune optimal surge thresholds, max fee settings, and other parameters like base fees and amplification. The surging mechanism enables a high-efficiency zone near the peg, while also acting as a backstop during volatility.
Next steps include:
Further optimizing hook settings based on real-world data.
Onboarding more stablecoins and ETH-based liquidity.
Expanding to BTC-correlated pairs while integrating boosting via rehypothecation.
Data Analysis
V3 Hooks Dashboard
The data team has developed a V3 Hooks dashboard to showcase curated hooks, featuring tailored visuals and key metrics that highlight the unique aspects of each hook. Meanwhile, other Balancer dashboards track overall key metrics and volume across the ecosystem.
Building on Balancer V3 comes with a wide range of benefits, making contract development the primary focus for builders. The data layer, integrations, and front-end support are largely handled by Balancer’s infrastructure, reducing the overhead of building a complete ecosystem around a new feature.
With deep integrations into aggregators to drive volume, robust data tooling, and a well-supported front-end, developers can spend less time on infrastructure and more time innovating. Whether designing new hooks, optimizing swap mechanics, or experimenting with novel liquidity strategies, Balancer V3 provides a powerful, streamlined foundation to bring ideas to life and we’d love to help.
Axiom is a really exciting new protocol that harnesses ZK technology to allow smart contracts to trustlessly compute over the history of Ethereum. I believe it’s a novel primitive for others to build with. The docs provide a lot of info about the protocol itself and include a helpful tutorial that can be followed to build an Autonomous Airdrop. An SDK is provided to improve the integration experience for developers and includes a CLI, a React client, and Typescript and Smart Contract libraries.
One of the SC libraries provides an extension to the standard Foundry test library, with a pretty interesting setup and implementations of custom cheat codes. I thought it would be interesting to investigate this a bit further, using the test from the Autonomous Airdrop example as a reference, specifically looking at AxiomTest in some more detail.
System Overview
To appreciate why the cheat codes are beneficial, it’s useful to have a high-level overview of the Axiom system. Following the flow of the Airdrop example:
1. A query is sent, specifying an AxiomV2Callback
2. The query is proven off-chain
3. The query is fulfilled on-chain, which calls the callback specified by the AxiomV2Callback in step 1
4. The callback runs
This allows a custom contract to make use of the results of the query and run custom logic
In the Airdrop example the AutonomousAirdrop.sol contract validates the relevant airdrop requirements and issues the token if they are met
When testing locally, the QueryFulfillment in step 3 will not be possible, which would block testing of the custom logic implemented in the callback used in step 4. That’s where the AxiomTest library can be used.
Step By Step Testing
Following AutonomousAirdrop.t.sol shows us step by step how to use AxiomTest and lets us investigate what is going on.
Importing
AxiomTest follows the same convention as a usual Foundry Test but instead we import AxiomTest.sol and inherit from AxiomTest in the test contract:
import { AxiomTest, AxiomVm } from "@axiom-crypto/v2-periphery/test/AxiomTest.sol";
contract AutonomousAirdropTest is AxiomTest { ...
Setup
setUp() is also the same as in Foundry: an optional function invoked before each test case is run. Here there’s a bit more going on:
function setUp() public {
_createSelectForkAndSetupAxiom("sepolia", 5_103_100);
inputPath = "app/axiom/data/inputs.json";
querySchema = axiomVm.compile("app/axiom/swapEvent.circuit.ts", inputPath);
autonomousAirdrop = new AutonomousAirdrop(axiomV2QueryAddress, uint64(block.chainid), querySchema);
uselessToken = new UselessToken(address(autonomousAirdrop));
autonomousAirdrop.updateAirdropToken(address(uselessToken));
}
Set up and run a new local fork using vm.createSelectFork(urlOrAlias, forkBlock) (see the Foundry docs);
Using the provided chainId, find the addresses for axiomV2Core and axiomV2Query from the local AxiomV2Addresses. These are actual deployments and currently only exist on mainnet/sepolia.
Initialise core and query contracts using the addresses and interfaces:
axiomVm = new AxiomVm(axiomV2QueryAddress, urlOrAlias, true);
AxiomVm.sol implements the cheatcode functionality as well as providing utility functions for compiling, proving, parsing args, etc.
Following initialisation of the fork, the axiomVm compile function is used to compile the local circuit and retrieve the querySchema associated with the circuit. The querySchema provides a unique identifier for a callback function to distinguish the type of compute query used to generate the query results passed to the callback, and it is used as a constructor argument when creating a new AutonomousAirdrop contract.
Behind the scenes, compile uses Foundry FFI to run the Axiom CLI compile command:
Finally, sendQuery itself is called on the axiomV2Query contract initialised during setup, using the parsed args.
Testing Callback
The test test_axiomCallback mocks step 3 in the System Overview and allows the callback to be tested.
function test_axiomCallback() public {
AxiomVm.AxiomFulfillCallbackArgs memory args =
axiomVm.fulfillCallbackArgs(inputPath, address(autonomousAirdrop), callbackExtraData, feeData, SWAP_SENDER_ADDR);
axiomVm.prankCallback(args);
}
Similar to the previous test, fulfillCallbackArgs uses the Axiom CLI to prove and queryParams to generate the required args for AxiomFulfillCallbackArgs. These are used in prankCallback to call the axiomV2Callback function on the AutonomousAirdrop contract (args.callbackTarget is the address) with the relevant spoofed Axiom results:
The axiomV2Callback function is inherited from the AxiomV2Client, and this function in turn calls _validateAxiomV2Call and _axiomV2Callback.
Conclusion
Following through these tests and libraries really helps in understanding the moving parts of the Axiom system, and hopefully this post helps others too. It’s exciting to see what gets built with Axiom as it becomes another core primitive!
Lately at Balancer we’ve moved from the Truffle development environment to using Buidler, Waffle and Ethers. The main benefit is being able to use console.log in Solidity during debugging – it’s amazing how much of a difference this makes, and for this alone the change-over is worth it. Here are some notes I made during the switch-over.
Ethers
The ethers.js library aims to be a complete and compact library for interacting with the Ethereum Blockchain and its ecosystem.
The following gist demonstrates some basic usage of Ethers that creates an instance of a deployed contract and then runs some calls against it:
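(The gist itself isn’t embedded here; this minimal sketch shows the same idea. The address and ABI fragment are placeholders.)

import { ethers } from 'ethers';

// Connect to a node and attach to an already-deployed contract.
const provider = new ethers.providers.JsonRpcProvider('http://localhost:8545');
const abi = ['function totalSupply() view returns (uint256)'];
const token = new ethers.Contract('0xYourContractAddress', abi, provider);

async function main() {
  // Read-only call against the deployed contract.
  const supply = await token.totalSupply();
  console.log(supply.toString());
}

main();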
Buidler & Waffle
Buidler is described as a ‘task runner’. I think it’s easiest to see it as a swap for Truffle/Ganache. It has lots of different plugins that make it really useful, and its documentation was refreshingly good.
The Quickstart shows you how to install it and how to run common tasks. It also uses Waffle for testing. Waffle is a simple smart contract testing library built on top of Ethers.js. Tests in Waffle are written using Mocha alongside Chai, and from my experience everything just worked. The docs are here. It’s worth digging in to see some of the useful things it offers, such as Chai Matchers, which allow you to test things like reverts, events, etc.
Buidler commands I found I used a lot:
Run the local Buidler EVM: $ npx buidler node
Compile project contracts: $ npx buidler compile
Run tests: $ npx buidler test ./test/testfile.ts
Here’s an example test file I used that demonstrates a few useful things:
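(The original gist isn’t embedded here; below is a minimal sketch in the same spirit, using the Buidler ethers plugin and Waffle’s Chai matchers. The BFactory contract and its LOG_NEW_POOL event are Balancer core names, used for illustration.)

import { ethers } from '@nomiclabs/buidler';
import { expect } from 'chai';

describe('BFactory', () => {
  it('creates a new pool and emits an event', async () => {
    const Factory = await ethers.getContractFactory('BFactory');
    const factory = await Factory.deploy();
    await factory.deployed();

    // callStatic simulates the call and returns the would-be result without changing state.
    const poolAddr = await factory.callStatic.newBPool();
    expect(poolAddr).to.properAddress;

    // Waffle's Chai matchers can assert on events (and reverts).
    await expect(factory.newBPool()).to.emit(factory, 'LOG_NEW_POOL');
  });
});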
Static Calls
let poolAddr = await factory.callStatic.newBPool(); – The contract callStatic pretends that a call is not state-changing and returns the result. This does not actually change any state and is free.
Connecting Different Accounts
await _pools[1].connect(newUserSigner).approve(PROXY, MAX); – Using contract connect(signer) calls the contract via the signer specified.
Setting the gasPrice to 0, as in the sketch below, allows me to run a transaction without spending any Eth on it. This was useful when checking Eth balance changes without having to worry about gas costs.
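Something along these lines, assuming an Ethers signer from the test context (recipient and amount are placeholders):

// A zero gas price means the sender's Eth balance changes only by the value sent.
await userSigner.sendTransaction({
  to: recipientAddress,
  value: ethers.utils.parseEther('1'),
  gasPrice: 0,
});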
I needed the test accounts to have more than the 1000 Eth balance set by default. In buidler.config.ts you can add accounts with custom balances, along the lines of the sketch below.
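A sketch of that config (the key and balance values are illustrative):

// buidler.config.ts (excerpt)
import { BuidlerConfig } from '@nomiclabs/buidler/config';

const config: BuidlerConfig = {
  networks: {
    buidlerevm: {
      accounts: [
        {
          privateKey: '0x...', // test-only key
          balance: '1000000000000000000000000', // 1,000,000 Eth in wei
        },
      ],
    },
  },
};

export default config;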
Deploying
Deploying is done using scripts. First I updated my buidler.config.ts with the account/key for Kovan that will be used to deploy (i.e. must have Eth):
async function main() {
// We get the contract to deploy
const ExchangeProxy = await ethers.getContractFactory("ExchangeProxy");
const WETH = '0xd0A1E359811322d97991E03f863a0C30C2cF029C';
const exchangeProxy = await ExchangeProxy.deploy(WETH);
await exchangeProxy.deployed();
console.log("Proxy deployed to:", exchangeProxy.address);
}
main()
.then(() => process.exit(0))
.catch(error => {
console.error(error);
process.exit(1);
});
Then run this using: npx buidler run --network kovan deploy-script.js
🎉 Console Logging 🎉
One of the holy grails of Solidity development and so easy to set up in this case! There are also Solidity stack traces and error messages, but unfortunately there was a bug that caused these not to work for our contracts.
To get this going, all you need to do is add import "@nomiclabs/buidler/console.sol"; at the top of your contract, then use console.log. More details on the kinds of outputs it supports are here. Lifesaver!
Hope some of this was helpful, and that you enjoy using it as much as I do.
In April I entered (and won!) the NuCypher+CoinList hackathon. I didn’t actually know much about the NuCypher tech before I got started but once I had built my DApp it was clear this is really interesting stuff and it’s stuck with me ever since as something interesting to build on.
Proxy Re-encryption
The NuCypher solution will eventually provide a decentralised privacy infrastructure but during the hackathon I was mainly making use of a subset of the tech, Proxy Re-encryption.
Proxy re-encryption is a set of encryption algorithms that allow you to transform encrypted data. Specifically… it allows you to re-encrypt data — so you have data that’s encrypted under one set of keys, you can re-encrypt the data without de-encrypting it first, so that now it’s encrypted under a second, different set of keys —NuCypher co-founder MacLane Wilkison
So What?
To understand why this is pretty awesome imagine I have some encrypted data I want to share with Bob, what are the options to do this?
Crazy way – I just give my private encryption key to Bob (who I’m sharing the data with), who can use it to decrypt the data. But now Bob has my key and who knows where this ends up.
Inefficient way – I decrypt the encrypted data then re-encrypt it using Bob’s public key. This is more secure for sure, but I have to do a lot more work. What if I have to do this many times? What if the encrypted data is stored and accessed over a network? How’s the information all being shared? Intensive!
How about the Proxy Re-encryption way:
With Proxy Re-encryption I encrypt the data once.
The encrypted data can be stored anywhere — Amazon, Dropbox, IPFS, etc. I only need to upload it once and provide access to the Proxy service (eventually this will be a NuCypher decentralised service)
The Proxy can re-encrypt the data for anyone else I choose (provided I have their public key) efficiently and without ever having access to the decrypted data.
Bob decrypts the data using his own key and resources.
If the data I’m sharing is a stream, i.e. a Twitter feed, then I can enable/revoke decryption access whenever I want — i.e. I can stop someone seeing the data.
NuCypher will eventually provide a decentralised privacy infrastructure which will replace a centralized proxy with a decentralized network. A really good overview of the NuCypher solution is here.
Combine all this with decentralised smart contract as a source of access — very cool!
My DApp was inspired by Simon de la Rouviere’s This Artwork Is Always On Sale, where he implements a Harberger Tax on the ownership of a digital artwork. In my app, instead of an artwork, access to a feed of data is always for sale. NuCypher is used to encrypt the data and only the current Patron can decrypt it (using NuCypher) to get access. Anyone can buy this access from the current Patron for the sale price set when they took ownership. Whilst they hold ownership they pay a 5% fee to the feed owner. In the demo app the data is a Twitter-like feed, but the concept could be extended to have more than one Patron and could also be used for other kinds of feed data such as sensor data, camera/video feeds, music, etc.
I was super happy to get a mention in Token Economy as Stefano’s favourite entry!
Vyper is a contract-oriented, pythonic programming language that targets the Ethereum Virtual Machine (EVM)
Vyper is a relatively new language that has been written with a focus on security, simplicity and auditability. It’s written in a Pythonic way, which appeals to me, and as a more secure alternative to Solidity I think it has a lot of potential. I plan on writing more about working with Vyper in the future.
Truffle — Too Much Of A Sweet Tooth?
I’ve recently finished working on a hackathon project and completed the 2018 ConsenSys Academy and during that time, for better or worse, I’ve become pretty accustomed to using the Truffle development environment for writing code, testing and deploying— it just makes life easier.
So, in an ideal world I’d like to use Truffle for working with Vyper. After a bit of investigation I found this ERC721 Vyper implementation by Maurelian, who did the work to make it Truffle compatible. I thought it might be useful to document the build process for use in other projects.
How To — Vyper Development Using Truffle
Install Vyper
The first step is to make sure Vyper is installed locally. If this has been done before you can skip this step — you can check by running the $ vyper -h command. There are various ways to install, including using PIP; the docs are here. I’m using a Mac and did the following:
Next I installed Truper, a tool written by Maurelian to compile Vyper contracts to Truffle-compatible artifacts. It uses Vyper, which is why we installed it previously. (See the next section for details of what it’s doing.) To install, run:
$ npm i -g truper
Compiling, Testing, Deploying
From your project dir (you can clone the ERC-721 project for a quick test).
Run ganache test network:
$ ganache-cli
Compile any Solidity contracts as usual using:
$ truffle compile
Compile Vyper contracts using the command:
$ truper
* this must be called from the project dir, and you must have the virtual environment in which you built Vyper active.
Truffle tests can be written and run the usual way, i.e.:
Use artifacts in test files:
const NFToken = artifacts.require('NFToken.vyper');
Run tests using:
$ truffle test
Truffle migrations also work the usual way. For example I used the following migration file to deploy to ganache:
Truper uses Vyper which is why we installed it in the first step. If we look at https://github.com/maurelian/truper/blob/master/index.js we can see Truper is creating Truffle artifact files for each Vyper contract and writing them to the ./build/contracts folder of the project.
Truffle Artifact Files
These *.json files contain descriptions of their respective smart contracts. The description includes:
Contract name
Contract ABI (Application Binary Interface — a list of all the functions in the smart contracts along with their parameters and return values). Created by Truper using: $ vyper -f json file.vy
Contract bytecode (compiled contract data). Created by Truper using: $ vyper -f bytecode file.vy
Contract deployed bytecode (the latest version of the bytecode which was deployed to the blockchain). Created by Truper using: $ vyper -f bytecode_runtime file.vy
The compiler version with which the contract was last compiled. (Doesn’t appear to get added until deployed.)
A list of networks onto which the contract has been deployed and the address of the contract on each of those networks. (Doesn’t appear to get added until deployed.)
Maurelian describes it as a hacky stop-gap but it works so thank you!
Well that’s been a fun and productive couple of months!
ConsenSys Academy 2018
I’m now officially a ConsenSys certified dApp Developer 👊! (Certificate apparently on its way)
The ConsenSys Developer course was definitely worthwhile. I covered a lot of Blockchain theory while following the course lectures and taking the quizzes. The real learning and fun came from the final project where I actually had to build something.
ConsenSys Academy Final Project
My final project was a bounty DApp that allows anyone to upload a picture of an item they want identified, along with an associated bounty in Eth for the best answer. I got a lot of experience using the various parts of the Web3 technology stack. I used Truffle for development/testing, IPFS for storing the pictures and data (it was cool to use this, a very powerful idea), uPort for identity, OpenZeppelin libraries (which are really useful), an upgradeable design pattern, deployment to Rinkeby, and lots of practice securing and testing smart contracts.
Colony Hackathon Winner
I also managed to bag myself a prize in the Colony Hackathon for my decentralised issue reporting app. I got the Creativity Honorable Mention which was pretty cool and I used my winnings to buy a Devcon IV ticket ✈️ 🤘!!
The Learnings
I came across a few things that I wanted to do while I was #BUIDLING but couldn’t easily find the info on, so I’ve been keeping a kind of cheat sheet. Hopefully it might help someone else out there.
The last few months I’ve confirmed to myself that the Blockchain/Ethereum world is something I want to be involved in. There are so many different, exciting areas to investigate further; now I just have to choose one and dive further down the rabbit hole!
I’ve been working on a rock, paper, scissors Ethereum DApp using Solidity, Web3 and the Truffle framework. I hit a few difficulties trying to replicate functionality that would normally be trivial in a non blockchain world so I thought I’d share what I learned.
My first thoughts for the DApp was to display a list of existing games that people had created. Normally if I were doing something like this in Django I’d create a game model and save any new games in the database. To display a list of existing games on the front end I’d query the db and iterate over the returned collection. (I realise storage is expensive when using the Ethereum blockchain but I thought trying to replicate this functionality would make sense and would be a good place to start.)
Solidity
Structures
While investigating the various data types that could be used I found the Typing and Your Contracts Storage page from Ethereum useful. I settled on using a struct, a grouping of variables, stored under one reference.
That handles one game, but I want to store all games. I attempted to do this in a number of different ways but settled on a mapping using the game’s index as the key. Every time a new game is added the index is incremented, so I also use gameCount to keep count of the total games.
I also added a function that returns the total number of games:
function GetGamesLength() public view returns (uint){
return gameCount;
}
Returning A Structure
Next I want to be able to get information about a game using its index. In Solidity a structure can only be returned by a function from an internal call, so for the front end to get the data I had to find another way. I went with the suggestion here — return the fields of the struct as separate return variables.
function GetGame(uint Index) public view returns (string, bool, address, uint, uint) {
return (games[Index].name, games[Index].isFinished, games[Index].ownerAddress, games[Index].stake, games[Index].index);
}
Front End
On the front end I use Web3 to iterate over each game and display it. To begin, I call the GetGamesLength() function. As we saw previously, this gives the total number of games. Then I can iterate the index from 0 to NoGames-1 to get the data for each game using the GetGame(uint Index) function.
The getAllGames function calls GetGame(uint Index) for each game. To do this I created a sequence of promises using the method described here:
getAllGames: function(NoGames, Instance){
var sequence = Promise.resolve()
for (var i=0; i < NoGames; i++){(function(){
var capturedindex = i
sequence = sequence.then(function(){
return Instance.GetGame.call(capturedindex);
}).then(function(Game){
console.log(Game + ' fetched!');
// Do something with game data.
console.log(Game[0]); // Name
console.log(Game[1]); // isFinished
}).catch(function(err){
console.log('Error loading ' + err)
})
}())
}
}
Conclusion
Looking back at this now it all looks pretty easy, but it took me a while to get there! I’m still not even sure if it’s the best way to do it. Any advice would be awesome, and if it helps someone, even better.
Recently I’ve been using Python and Cartopy to plot some Latitude/Longitude data on a map. Initially it took some time to figure out how to get it to work, so I thought I’d share my code in case it’s useful.
“a Python package designed to make drawing maps for data analysis and visualisation as easy as possible.”
I’m not sure how active the project is, and I found the documentation a bit lacking, but once I was up and running it was pretty easy to use and I think the results look pretty good.
Plotting My Data
I have a csv file with various data timestamped and saved on each line. For this case I was interested in the lat/lng location, signal strength (for an antenna) and also a satellite number. An example of one line of data is:
lat/lng position: 57.008263,-5.827861
signal strength: 1.63
satellite number: 310.00
Initially, for each lat/lng position I wanted to plot the point on a map and colour the marker at that point to show which satellite number it was. Also, if the signal strength was -100, the marker colour should be shown as red. An example taken from some of the data is shown below.
Lat/Lng Plots with different zoom level
The following Gist shows the Python script I used:
Script Details
Most of the script is actually concerned with reading the file and parsing the relevant data. The main plotting functionality is in the section:
The projection sets the coordinate system and is selected from the Cartopy projection list (there’s a lot to pick from and I chose the one I thought looked the best).
Next a coastline is added to the projection. As I was focusing on a small section of Scottish coastline, I went with the 10m resolution, which is the highest, but lower resolutions can be selected as detailed in the documentation.
Finally a scatter plot is created. The data has been parsed into equal sized lists of longitude and latitude points.
The ‘s’ parameter defines the size of the marker at each point, in this case all set to 1pt radii.
The ‘c’ parameter defines the colour of the marker, in this case blue for satellite 310, green for 60, yellow for 302, black for any other satellite and red if signal strength is -100.
Finally the transform=ccrs.Geodetic() sets the lat/lng coordinate system as defined here.
Scaling Marker Size
It’s also possible to adjust the radius of the marker at each point. To scale it relative to the signal strength (I removed the -100 strengths):