What are Aura, Babe, Grandpa, and POW in Substrate/Polkadot?
There are different ways that a blockchain network can come to consensus about changes to the chain. Aura, Babe, Grandpa, and Proof of Work (PoW) are all different consensus mechanisms in Substrate/Polkadot. These consensus layers are designed so that they can be easily changed during development and can even be hot-swapped after the chain goes live!
Let’s talk about all these in detail.
AURA (Authority Round)
Aura primarily provides block authoring. In Aura, a known set of authorities are allowed to produce blocks. The authorities must be chosen before block production begins, and all authorities must know the entire authority set. For block production, time is divided into "slots" of a fixed length. During each slot, one block is produced, and the authorities take turns producing blocks in order, forever.
In Aura, forks only happen when it takes longer than the slot duration for a block to traverse the network. Thus forks are uncommon in good network conditions.
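The round-robin slot rule above can be sketched in a few lines of Rust. This is an illustration only; the names are invented for the sketch and are not Substrate's actual API:

```rust
// Toy illustration of Aura's deterministic slot assignment.

/// Given the size of the fixed authority set and a slot number,
/// return the index of the authority expected to author that slot.
fn author_for_slot(num_authorities: usize, slot: u64) -> usize {
    // Aura simply rotates through the set: slot N is claimed by
    // authority (N mod set_size), forever.
    (slot % num_authorities as u64) as usize
}

fn main() {
    let authorities = ["alice", "bob", "charlie"];
    for slot in 0..6 {
        let idx = author_for_slot(authorities.len(), slot);
        println!("slot {} -> {}", slot, authorities[idx]);
    }
}
```

Because every honest node computes the same index for every slot, a block authored out of turn is simply rejected.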
BABE (Blind Assignment for Blockchain Extension)
Babe also primarily provides block authoring. It's a slot-based consensus mechanism with a known set of validators, similar to Aura. In addition, each validator is given a weight, which must be assigned before block production can begin. Unlike in Aura, the authorities do not follow a strict order. Instead, for each slot, each authority uses a verifiable random function (VRF) to generate a pseudorandom number. An authority is allowed to produce a block if its number is less than its weight.
Forks are more prevalent in Babe than in Aura, and they happen even in ideal network conditions, because many validators may be able to create a block during the same slot.
Substrate’s implementation of Babe also has a fallback mechanism for when no authorities are chosen in a given slot.
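The slot-claim rule described above boils down to a threshold test, which can be sketched like this. This is a hedged simplification: a real implementation produces a VRF output together with a proof that others can verify, whereas here a plain integer stands in, and the weight-to-threshold mapping is illustrative:

```rust
// Toy sketch of BABE's slot-claim rule: an authority may author a
// block when its (pseudo)random draw falls below a threshold
// proportional to its weight.

fn may_author(vrf_output: u64, weight: u64, total_weight: u64) -> bool {
    // The authority's share of the output space is weight/total_weight.
    // Compute the threshold in u128 to avoid overflow.
    let threshold =
        (u64::MAX as u128 * weight as u128 / total_weight as u128) as u64;
    vrf_output < threshold
}

fn main() {
    // An authority holding half the total weight claims roughly
    // half of all slots.
    assert!(may_author(u64::MAX / 4, 1, 2));
    assert!(!may_author(u64::MAX / 4 * 3, 1, 2));
    println!("threshold test works as expected");
}
```

Because every authority draws independently, zero or several authorities may pass the test in the same slot, which is exactly why Babe forks more often than Aura.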
Proof of Work (PoW)
Like Babe and Aura, it also provides block authoring. Unlike them, however, it is not slot-based and does not have a known authoring set. In Proof of Work, anyone can produce a block at any time, as long as they can solve a computationally challenging problem (typically a hash preimage search). The difficulty of this problem can be tuned to provide a statistical target block time.
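The hash-preimage search can be sketched as a brute-force loop. This toy version uses the standard library's `DefaultHasher` purely to keep the sketch dependency-free; real chains use a cryptographic hash such as SHA-256:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy proof-of-work: find a nonce whose hash of (data, nonce)
// falls below a difficulty target.

fn hash_with_nonce(data: &str, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

fn mine(data: &str, target: u64) -> u64 {
    // Brute-force search. Lowering `target` raises the expected
    // number of attempts, which is how block time is tuned.
    (0..).find(|&n| hash_with_nonce(data, n) < target).unwrap()
}

fn main() {
    let target = u64::MAX / 1000; // roughly 1000 attempts expected
    let nonce = mine("block payload", target);
    assert!(hash_with_nonce("block payload", nonce) < target);
    println!("found nonce {}", nonce);
}
```

Verifying a solution takes one hash; finding one takes many, and that asymmetry is the whole mechanism.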
GRANDPA (GHOST-based Recursive ANcestor Deriving Prefix Agreement)
It provides block finalization. Like Babe, it has a known, weighted authority set. However, GRANDPA doesn't author blocks; it just listens to gossip about blocks that have been produced by some authoring engine, such as the three mentioned above.
It works in a partially synchronous network model as long as 2/3 of nodes are honest and can cope with 1/5 Byzantine nodes in an asynchronous setting.
GRANDPA distinguishes itself by reaching agreements on chains rather than blocks, which speeds up the finalization process significantly, even after long-term network partitioning or other networking difficulties.
Each authority participates in two rounds of voting on blocks. Once more than 2/3 of the GRANDPA authorities have voted for a particular block, it is considered finalized.
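The supermajority check at the heart of this can be sketched in a couple of lines (illustrative names; real GRANDPA tallies weighted votes over chains, not single blocks):

```rust
// Toy tally for GRANDPA-style finality: a block is finalized once
// voters representing more than 2/3 of the total weight back it.

fn is_finalized(votes_for: u64, total_weight: u64) -> bool {
    // Strictly more than 2/3 of the weighted authority set,
    // computed without floating point.
    3 * votes_for > 2 * total_weight
}

fn main() {
    assert!(!is_finalized(6, 9)); // exactly 2/3 is not enough
    assert!(is_finalized(7, 9));
    println!("7 of 9 finalizes: {}", is_finalized(7, 9));
}
```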
What is Decentralized Identity?
Decentralized Identity is an emerging concept in which control is given to consumers through the use of an identity wallet, with which they collect verified information about themselves from certified issuers.
In this article, we’ll be looking at DIDs — what they are, DID documents, Verifiable data, and how they work.
I'll also try to explain why we use DIDs and what problems they aim to solve.
The problem
Secrets such as passwords and encryption keys are used to help protect access to resources such as computing devices, customer data, and other information. Unauthorized access to resources can cause significant disruption and negative consequences. Many solutions have been proposed to protect these secrets and, in turn, protect the security and privacy of software systems. According to research by Zakwan Jaroucheh, each of these solutions follows the same approach: once the consumer receives the secret, it can be leaked and used by any malicious actor. Time and time again, we've heard of cases of compromised private information leading to the loss of billions of dollars.
How, then, can we decentralize secret management so that the secret never has to be sent to the consumer? I guess I can say… this is where DIDs come in.
First, let’s define Identity.
Identity is the fact of being who or what a person or thing is, defined by unique characteristics. An identifier, on the other hand, is a piece of information that points to a particular identity. It could be a name, date of birth, address, email address, etc.
A decentralized identifier (DID) is an address on the internet that someone, referred to as the subject, which could be you, a company, a device, or a data model, can own and directly control. It can be used to find a DID document connected to it, which provides extra information for verifying the signatures of that subject. The subject (which may be you) can update or remove the information in the DID document directly.
For instance, if you're on Twitter, you likely own a username; think of a DID as your username on Twitter. In the case of a DID, however, the username is randomly generated. Other information about you is accessible through your username (the DID document), and you have the ability to update this information over time.
Each DID carries a prefix, called the DID method, that identifies its origin and tells you where to fetch its DID document. For instance, a DID from the Sovrin network begins with did:sov, while one from Ethereum begins with did:ethr. Find the list of registered DID methods here.
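Pulling the method prefix out of a DID can be sketched with a minimal parser. This is an illustration only; the full DID syntax (paths, fragments, percent-encoding) is defined by the W3C DID Core specification:

```rust
// Minimal parser for the did:<method>:<method-specific-id> shape.

fn parse_did(did: &str) -> Option<(String, String)> {
    let mut parts = did.splitn(3, ':');
    // The scheme must be the literal "did".
    if parts.next()? != "did" {
        return None;
    }
    let method = parts.next()?.to_string();
    let id = parts.next()?.to_string();
    if method.is_empty() || id.is_empty() {
        return None;
    }
    Some((method, id))
}

fn main() {
    let (method, id) = parse_did("did:ethr:0xabc123").unwrap();
    println!("method = {method}, id = {id}");
    assert!(parse_did("not-a-did").is_none());
}
```

A resolver would use the `method` part to pick the right network (Sovrin, Ethereum, etc.) before looking up the DID document.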
Let’s briefly look at some of the concepts you’ll likely come across when learning about DIDs.
DID Document
In a nutshell, a DID document is a set of data that describes a Decentralized Identifier. According to JSPWiki, it includes mechanisms, such as public keys and pseudonymous biometrics, that an entity can use to authenticate itself as the DID's subject. Additional characteristics or claims describing the entity may also be included in a DID document.
DID Method
According to W3C, a DID method is defined by a DID method specification, which specifies the precise operations by which DIDs and DID documents are created, resolved, updated, and deactivated. The associated DID document is returned when a DID is resolved using a DID Method.
Verifiable Credentials
When you hear of verifiable credentials (VCs), what comes to mind? Probably your passport, license, certifications, and any other identification you might have.
Those have to do with the physical world. Digitally, if someone wants to verify or examine your identity, how can they do it? A verifiable credential, in the simplest terms, is a tamper-proof credential that can be verified cryptographically.
A verifiable credential ecosystem consists of three entities:
- The Issuer
- The Holder
- The Verifier
The entity issuing the credential is known as the issuer; the entity for whom the credential is issued is known as the holder, and the entity determining whether the credential satisfies the requirements for a VC is known as the verifier.
For example, say a school certifies that a particular individual has taken the degree exams and this information is verified by a machine for its authenticity.
Here, the issuer is the school, the holder is the individual who has taken the exam, and a verifier is a machine that checks the verifiable presentation for its authenticity. Once verified, the holder is free to share it with anyone he/she wishes.
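The issue/verify round trip in Anita's example can be sketched as follows. This is a hedged toy: a real VC uses public-key signatures, while here a shared-secret hash stands in purely to show the three roles and the data flow:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy "signature": hash of the claim mixed with the issuer's secret.
fn sign(claim: &str, issuer_secret: u64) -> u64 {
    let mut h = DefaultHasher::new();
    claim.hash(&mut h);
    issuer_secret.hash(&mut h);
    h.finish()
}

// Issuer (the school): attaches a signature to the claim.
fn issue(claim: &str, issuer_secret: u64) -> (String, u64) {
    (claim.to_string(), sign(claim, issuer_secret))
}

// Verifier (the machine): recomputes and compares, without ever
// contacting the holder's school directly.
fn verify(credential: &(String, u64), issuer_secret: u64) -> bool {
    sign(&credential.0, issuer_secret) == credential.1
}

fn main() {
    // Holder (Anita) carries the credential in her wallet.
    let cred = issue("Anita: community management degree", 42);
    assert!(verify(&cred, 42));

    // Tampering with the claim invalidates the credential.
    let forged = (format!("{} (forged)", cred.0), cred.1);
    assert!(!verify(&forged, 42));
    println!("credential verified, forgery rejected");
}
```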
I hope everything is clear up to this point.
Let’s take a dive into some of the reasons for decentralized identity.
Following his critique of web 3.0, Jack Dorsey, the former CEO of Twitter, introduced the web 5.0 initiative. By claiming that ownership is still a myth since venture capitalists and limited partnerships will take on a sizable chunk of the web, Dorsey highlighted the current constraints in web 3.0. He claimed that web 3.0 would keep a lot of things centralized, necessitating the creation of web 5.0.
One of the prime use cases for web 5 is empowering users with control of their identity, which we refer to as Decentralized Identity, used interchangeably with Self-Sovereign Identity (SSI). It is an approach to digital identity that gives individuals control of their digital identities. Why did Jack introduce web 5? Why do more people want to take back control of their data through decentralization and blockchain? What benefits does this hold for people and organizations?
Benefits of decentralized identity for Organizations
- Decentralized Identities allow organizations to verify information instantly without having to contact the issuing party, such as a driver's licensing agency or university, to ensure that IDs, certificates, or documents are valid. Manually verifying credentials takes a lot of time, sometimes weeks or months, which slows down recruitment and processing while consuming financial and human resources. By scanning a QR code or running it through a credential-validator tool, we can quickly and easily validate someone's credentials with DIDs. Here is a typical example of how a company can leverage decentralized identity technology to hire efficiently:
- Anita, a job applicant, manages her decentralized identity and Verifiable Credentials on her phone with a wallet, and wants to apply to a company looking for a community manager.
- She attended a boot camp that gave her a community management degree, which she keeps in her digital wallet as a Verifiable Credential that can't be faked.
- The company makes a job offer; they just need to check that her certificate is authentic.
- The company requests her data, and she is prompted on her phone to authorize the company to see her certificate.
- The company receives a QR code and simply scans it to instantly confirm that her community management certificate is authentic.
- They offer Anita the job.
The traditional, manual verification process would have taken several weeks or months to achieve the same outcome.
- DIDs enable issuing organizations to conveniently provide Verifiable Credentials to people and prevent fraud which in turn, greatly reduces costs and increases efficiency. Many people, even in positions with a lot of risks, use forged or fraudulent certificates to apply for jobs. A university can issue fraud-proof credentials, which the recruiting organizations can easily verify, thereby reducing the possibility of forgery.
Benefits of decentralized identity for Individuals
- Decentralized identity increases individual control of identifying information. Without relying on centralized authority and third-party services, decentralized IDs and attestations can be validated.
- People can choose the details they want to share with particular entities, including the government or their employment.
- Decentralized identity makes identity data portable. Users can exchange attestations and IDs with anybody they choose by storing them in their mobile wallets. Decentralized identities and attestations are not stored in the issuing organization’s database permanently. Assume that someone called Anita has a digital wallet that helps her to manage authorizations, IDs, and data for connecting to different applications. Anita can use the wallet to enter her sign-in credentials with a decentralized social media app. She wouldn’t need to worry about making a profile because the app already recognizes her as Anita. Her interactions with the app will be stored on a decentralized web node. What Anita can do now is, switch to other social media apps, with the social persona she created on the present social media app.
- Decentralized identity enables anti-Sybil mechanisms that detect when one individual is pretending to be multiple people to game or spam a system. It frequently becomes impractical to log in several times without the system noticing a duplicate, as the user would need to use identical credentials each time.
Conclusion
Decentralized Identity has a lot of pros, and so many individuals and organizations are already keying into it. A lot of companies like Spruce ID, Veramo, Sovrin, Unum ID, Atos, etc have worked hard to create decentralized identity solutions. I hope to see where these efforts lead and look forward to seeing DIDs become more used in a bunch of applications as well.
For further reading, feel free to check out these resources
- https://identity.foundation/faq/
- https://www.gsma.com/identity/decentralised-identity
- https://venturebeat.com/2022/03/05/decentralized-identity-using-blockchain/
From an article by Amarachi Emmanuela Azubuike
Understanding Basic Substrate Code
- ChainSpec.rs: The chain spec file is responsible for the initial genesis configuration of the chain. From there we can do the following things:
- We can add or manage accounts that we will get during the starting of the chain.
- How We can generate a new account.
- We can pre-fund the account. So that we can make transactions on the chain.
- We can add any other prerequisite related to any pallet if the pallet needs it.
- Basically, we can do everything we need on the chain at genesis (the start of the chain).
- Cli.rs: The CLI file is responsible for all the customization of the commands used to interact with the chain, for example:
- How blocks will be generated: instant seal, manual, or default.
- How we can build the spec for the chain, purge the chain, and more.
- Basically, whatever we can do with the chain through commands.
- Command.rs: An extension/helper file for the CLI one.
- Main.rs: The main file, used just to instantiate the CLI.
- Rpc.rs: The RPC file is responsible for all the methods and customizations related to RPC (Remote Procedure Call), i.e., how external clients and tools call into the node remotely.
- Service.rs: The service file is responsible for the main business logic of block generation, such as:
- The consensus protocol.
- How blocks are generated for the different commands, such as manual or instant seal.
- How the chain's nodes interact with one another.
- And everything related to the database, telemetry, RPC, finality, and much more. It basically takes its configuration from the Runtime module, which we set up with respect to each pallet.
- Runtime: The Runtime is responsible for the coupling and config customization of the pallets. It contains lib.rs, which has the main code; the benchmarking and other files also live here.
- Lib.rs: This is the file where we can do the coupling and customization. Let’s understand one by one:
- Firstly, it has all the basic configuration related to the runtime, like the chain name, chain version, etc.
- We need to configure/implement all the pallets we are using in the chain, like frame_system, grandpa, aura, timestamp, sudo, etc.
- We create the chain runtime by adding all the pallets into it.
    construct_runtime!(
        pub enum Runtime where
            Block = Block,
            NodeBlock = opaque::Block,
            UncheckedExtrinsic = UncheckedExtrinsic
        {
            System: frame_system,
            RandomnessCollectiveFlip: pallet_randomness_collective_flip,
            Timestamp: pallet_timestamp,
            Aura: pallet_aura,
            Grandpa: pallet_grandpa,
            Balances: pallet_balances,
            TransactionPayment: pallet_transaction_payment,
            Sudo: pallet_sudo,
        }
    );
- We can also add more business logic for any pallet or the node.
- We also implement different traits for the different operations on the node, like executing the block, finalizing the block, validating transactions, etc.
- Type 1: The changes we covered in service.rs.
- Type 2: The changes we can make from the runtime.
- We can update the block generation time through the timestamp pallet's configuration. Similarly, we can make more customizations with the help of other pallets.
- We can customize the block in the transaction pool.
- We can add the conditions/validations in the different block transaction methods like execute the block, finalize block, etc.
- The above changes can be made in the implementations of the related traits. Along with that, if we want to customize the block, we can look at the other methods in the traits of the executive file(link).
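As a hedged illustration of the runtime-side customization mentioned above, this is roughly how the block time is set through the timestamp pallet in a typical node-template runtime. The names follow the Substrate node template and may differ between versions; treat this as a sketch, not a drop-in snippet:

    // Sketch of the block-time knob in a node-template runtime.
    pub const MILLISECS_PER_BLOCK: u64 = 6000;
    pub const SLOT_DURATION: u64 = MILLISECS_PER_BLOCK;

    impl pallet_timestamp::Config for Runtime {
        type Moment = u64;
        type OnTimestampSet = Aura;
        // The minimum gap between blocks is conventionally half the slot.
        type MinimumPeriod = ConstU64<{ SLOT_DURATION / 2 }>;
        type WeightInfo = ();
    }

Changing MILLISECS_PER_BLOCK here is the "Type 2" route: the runtime, not the node service, decides the target block time.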
Blockchain Jargon
As a blockchain developer, you must learn about different technologies and trends that have dominated the blockchain space. This blog post will walk you through sidechains, atomic transactions, atomic swaps, non-custodial wallets, Layer 2 blockchain solutions, IDOs, and Turing Complete smart contracts.
Make sure to understand each concept because many blockchain companies expect blockchain developers to understand the concepts that have shaped the blockchain space.
Let’s look at the first concept, which is atomic transactions.
Atomic Transactions and Why Use Them?
An atomic transaction allows you to group multiple transactions to be submitted at one time. If any of the transactions in this group fails, then all other transactions in this group fail as well. In other words, an atomic transaction guarantees that either all transactions succeed or all fail.
Now you might wonder why developers would want this functionality. It is particularly useful for decentralized exchanges, for instance. When two parties agree to exchange assets, they each create a transaction and group the pair as an atomic transaction. If one transaction fails, the atomic transaction guarantees that the other will also fail. This characteristic is important because you don't want to send an asset to an unknown person without receiving the promised asset in return.
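The all-or-nothing behaviour can be sketched as applying a group of transfers to a copy of the state and committing only if every transfer succeeds (illustrative types and names):

```rust
use std::collections::HashMap;

type Balances = HashMap<&'static str, i64>;

// Apply a group of (from, to, amount) transfers atomically:
// return the new state only if *every* transfer succeeds.
fn apply_group(
    state: &Balances,
    transfers: &[(&'static str, &'static str, i64)],
) -> Option<Balances> {
    let mut draft = state.clone(); // work on a copy
    for &(from, to, amount) in transfers {
        let bal = draft.get_mut(from)?; // unknown account aborts
        if *bal < amount {
            return None; // one failure aborts the whole group
        }
        *bal -= amount;
        *draft.entry(to).or_insert(0) += amount;
    }
    Some(draft) // the caller commits this draft in one step
}

fn main() {
    let state = Balances::from([("alice", 100), ("bob", 5)]);
    // Both legs of the swap succeed, so the group commits...
    assert!(apply_group(&state, &[("alice", "bob", 50), ("bob", "alice", 5)]).is_some());
    // ...but if bob cannot cover his leg, alice's leg never happens either.
    assert!(apply_group(&state, &[("bob", "alice", 10), ("alice", "bob", 50)]).is_none());
    println!("atomic group semantics hold");
}
```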
Atomic Swap
Smart contracts facilitate atomic swaps, which allow users to exchange assets without using a centralized exchange. Most often, two different blockchain platforms would use an atomic swap contract to make their tokens interoperable.
These contracts would use Hash Timelock Contracts (HTLC), which are time-bound smart contracts that require the involved parties to confirm the transaction within a specified timeframe. If this happens, the transaction succeeds, and the contract exchanges the funds. Thus, it allows users to safely exchange tokens without using a centralized exchange.
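The two conditions an HTLC checks, the right hash preimage and an unexpired deadline, can be sketched like this (toy hash and illustrative names, not a real contract):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_of(secret: &str) -> u64 {
    let mut h = DefaultHasher::new();
    secret.hash(&mut h);
    h.finish()
}

// Funds unlock only if the presented secret hashes to the agreed
// value AND the deadline has not passed; otherwise they become
// refundable to the original sender.
fn can_claim(secret: &str, hashlock: u64, now: u64, deadline: u64) -> bool {
    now <= deadline && hash_of(secret) == hashlock
}

fn main() {
    let hashlock = hash_of("swap-secret");
    assert!(can_claim("swap-secret", hashlock, 100, 200)); // in time, right secret
    assert!(!can_claim("wrong", hashlock, 100, 200));      // wrong preimage
    assert!(!can_claim("swap-secret", hashlock, 300, 200)); // too late: refund path
    println!("HTLC conditions behave as expected");
}
```

In an atomic swap, revealing the secret to claim funds on one chain automatically lets the counterparty claim on the other chain with the same secret.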
However, this technology has become less popular as it comes with some disadvantages, such as the slow trading speed and few people using these services, causing price slippage.
Fun side note: The first atomic swap ever was conducted between Decred and Litecoin in September 2017.
Non-custodial Wallets
A non-custodial wallet gives you full control over your funds because you own the key pair for your wallet. In other words, only you know the private key that unlocks your wallet. However, if you lose the private key, you lose all access to your wallet. Therefore, it's important to store a backup of your key somewhere safe in case you forget or lose it. Unfortunately, you wouldn't be the first person to lose the private key to their wallet.
Most people prefer to use custodial wallets hosted by exchanges. Often, these are shared wallets where exchanges store the funds of multiple users and track each user's balance in a centralized database. Of course, it's much easier to access your wallet, or retrieve access when you've lost it. However, you sacrifice security for ease of use.
For people who trade on a daily basis, it makes sense to keep their funds in a centralized exchange. Just make sure to know the difference between both.
Sidechains – What is a sidechain?
Many blockchain platforms are trying to implement or have implemented sidechain technology to improve blockchain scalability. Blockchains have to meet an ever-growing demand for scalability. We can increase the block size to store more transactions per block. Unfortunately, that’s not an ideal solution as it requires more processing power to verify these blocks and broadcast them in a timely manner.
Therefore, sidechains allow for faster scaling without altering any of the properties of the underlying blockchain. A sidechain is a separate blockchain that is pegged against the mainchain. This means that both chains are interoperable. It allows assets from the sidechain to move to the mainchain and vice versa.
Using a sidechain, users can transact assets on this chain without congesting the mainchain. On top of that, there’s no requirement for mainchains to store each transaction that happened on the sidechain. Users can transact multiple times, and the mainchain will only store the final balance on its chain when the user decides to swap their assets to the mainchain again.
Layer 2 Blockchain: State Channels and Sidechains
We all know that blockchain technology is in search of scalability. Sidechains have proven to be a great solution. However, more and more developers started building Layer 2 scaling solutions. When we refer to Layer 2 scaling solutions, we talk about solutions built on top of an existing blockchain platform like Ethereum.
State channels are an example of a Layer 2 solution that allows users to execute multiple transactions but only record a single transaction. For instance, you agree with a person to pay $10 each day for an entire month. In this situation, you would pay 30 times a transaction fee and congest the network.
Therefore, you can use a state channel where you agree to keep track of all the separate transactions yourself and combine all payments at the end of the month. Instead of paying 30 times a transaction fee, you’ve only paid a transaction fee once.
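The netting idea can be sketched as a fold: thirty off-chain payments collapse into the single balance that the eventual on-chain settlement records (illustrative names, signatures and dispute handling omitted):

```rust
// Sketch of a state channel's core trick: many off-chain payments
// are netted into one final balance, and only that single
// settlement ever hits the chain.

fn settle(opening_balance: i64, payments: &[i64]) -> i64 {
    // Each payment updates the channel state off-chain; the chain
    // only ever sees the result of this fold.
    payments.iter().fold(opening_balance, |bal, p| bal - p)
}

fn main() {
    // 30 days of $10 payments collapse into one on-chain transaction.
    let payments = [10; 30];
    let final_balance = settle(500, &payments);
    assert_eq!(final_balance, 200);
    println!("one settlement transaction records balance {final_balance}");
}
```

In a real channel, each intermediate state is co-signed by both parties so either can enforce the latest agreed balance on-chain.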
Note that sidechains are also an example of Layer 2 scaling solutions. If you would like to learn more about Layer 2 solutions, check out the Plasma solution, where designated individuals make sure to move transactions from the Plasma chain to the mainchain.
Initial DEX Offering (IDO) – The New Way of Funding
The topic of IDOs has become wildly popular for raising capital among the crypto ecosystem. While ICOs were hot in 2017, an IDO is the new fundraising model of 2020 and forward.
Instead of selling tokens for a fixed price, an IDO allows investors to start trading the new token immediately. On top of that, liquidity pools make sure that the new token benefits from immediate liquidity. Many ICO projects have failed because there wasn’t sufficient liquidity causing extreme price slippage, negatively affecting the token price and often the project’s long-term survival.
Moreover, an IDO model allows for more fair fundraising because anyone can jump in and buy the token. No pre-sale that offers early investors a better price for their token. For that reason, investors enjoy the IDO model.
Turing Complete – What does it have to do with smart contracts?
A Turing Complete machine can run any program and solve any kind of problem given infinite resources and time. But why does this matter for blockchain technology?
In the early blockchain days, Ethereum branded itself as Turing Complete, while Bitcoin's scripting language deliberately is not. Because contracts run on many computers at once and cannot simply be taken down, a contract stuck in an infinite loop could consume the whole network's resources. In general, it is impossible to predict in advance whether an arbitrary program will finish (the halting problem), so Ethereum instead meters every contract execution with gas: when a contract runs out of gas, execution halts and its effects are reverted. This bounded execution is vital for the security of the Ethereum Virtual Machine.
Blockchain developers can use the Solidity programming language to develop Turing Complete smart contracts compatible with the Ethereum blockchain.
From an article by Michiel Mulders
Top Blockchain Trends in 2022
Last year was a very fruitful year for the crypto community and cryptocurrency as a whole. Many things changed in the crypto world, and many new users started using cryptocurrency. It also gave very good returns to cryptocurrency investors.
But those are the things that happened last year. Now the question is: what is the way forward in the coming year?
As we all know, cryptocurrency and the technology behind it, blockchain, build trust and security for their users in the online world. Blockchain is not only used for transactions but also for solving many other problems around the world. It has no borders, and you could say it democratizes things in a very efficient and secure manner.
Nowadays, blockchains are used in multiple ways and in multiple places: smart contracts, logistics, supply-chain provenance, security, protection against identity theft, etc. We can say that blockchain can be used anywhere there is a need for security and integrity and where a database is accessed by multiple people. According to TechJury, worldwide spending on blockchain solutions will reach $11.7 billion during 2022. Here I am going to discuss some of the trends of 2022 that may change the lives of many.
1. More countries may adopt Bitcoin and national cryptocurrencies:
In 2021, El Salvador became the first country to adopt Bitcoin as legal tender, meaning it can be used across the country to buy different types of goods and even to pay salaries.
Many qualified people in the Bitcoin world say that during 2022 a number of countries will adopt Bitcoin and related cryptocurrencies as legal tender.
2. IoT integration with Blockchain:
As we all know, blockchain ledgers are automated, encrypted, and immutable, properties that IoT needs for security and scalability. Blockchain could even be used for machine-to-machine transactions, enabling micropayments to be made via cryptocurrencies when one machine or network needs to procure services from another.
So in 2022, there might be pilot projects going on for IoT integration with blockchain.
3. Only 0.71% of the world's population uses blockchain technology:
According to Edureka
Blockchain adoption statistics show that only 0.71% of the human population is currently using blockchain technology, or somewhere around 65 million people. According to even the most conservative estimates, this number is expected to quadruple in 5 years, and in 10 years, 80% of the population will be involved with blockchain technology in some form. That is, millions of people will be added during 2022.
4. NFT is expanding beyond online Art:
We have all heard about Non-Fungible Tokens (NFTs) and their growth during 2021. The prices of different NFTs skyrocketed, and NFTs of all sorts of things came into existence. Many artists, actors from different industries, and musicians jumped into the NFT world. Along with that, the metaverse push by Facebook, Microsoft, and Nvidia created another level of hype in the market. So there will be plenty of opportunity during 2022 for innovative NFT use cases.
And many more things will happen in the blockchain and cryptocurrency world during 2022. Blockchain can be used in vaccine manufacturing and tracking, supply chain and logistics management, and much more.
What is Rust Programming Language?
What exactly is Rust, and why is it so popular nowadays? You may have come across this question if you're new to the world of computing. While Python and Java are still the most popular programming languages, Rust is quickly gaining traction. In this article, you will understand why Rust is so important, and finally, how and where to start learning it.
About Rust
Rust is a popular programming language that was created by Graydon Hoare at Mozilla Research with support from the community. It's a statically typed, multi-paradigm, general-purpose programming language designed for superior performance and safety. Rust has a syntax comparable to C++ but no garbage collection.
It's important to remember that Rust provides zero-cost abstractions, generics, and functional features, which eliminates the majority of the problems that low-level language programmers experience. As a result, Rust is used to build a wide range of websites and applications, including Dropbox, Figma, NPM, Mozilla, Coursera, Atlassian, and many others. Additionally, Microsoft's use of Rust for dependable and safety-critical development tools has bolstered the language's reputation.
Why is Rust so popular?
1. Rust solves Memory Management Issues.
Systems programming frequently requires low-level memory management, and with C's manual memory management, this task can be a real pain.
Rust has an amazing capacity to deliver convenience in even the smallest aspects. Because it doesn't need a garbage collector running in the background, it has full access to hardware and memory. This means that writing low-level code in Rust feels like programming a microcontroller: you have complete control over code updates without compromising memory safety.
2. Rust is excellent for embedded programming because of its low overhead.
Limited resources are common in embedded systems, which are commonly found in machines and home appliances. This is why low-overhead programming languages like Rust are necessary for embedded systems.
Rust is a resource-efficient and in-demand ingredient in embedded devices. It allows programmers to spot faults early on, preventing device failures.
The ability of Rust to generate zero-cost abstractions is the cherry on top. Rust is adaptable enough to accommodate any code abstraction you choose to use. You can use loops, closures, or whatever flavour of code you choose that day, and they’ll all compile to the same assembly without affecting your work’s performance.
3. Rust Makes It Easier to Create Powerful Web Applications
When it comes to choosing the right technology stack for web app development, the programming language is critical. There are a number of strong reasons to use Rust programming in your web app architecture.
If you’re used to creating web applications in high-level languages like Java or Python, you’ll love working with Rust. You may rest assured that code written in Rust will be error-free.
Anyone who knows C will find Rust to be a breeze to pick up. Furthermore, you don’t have to spend years learning the ropes before you can start dabbling with Rust.
Some of the key advantages of utilizing Rust for web development are as follows:
Rust can be compiled to WebAssembly, which makes achieving near-native web performance much easier.
Because Rust compiles to WebAssembly, the resulting code is portable and can be executed anywhere a WebAssembly runtime is available.
4. Rust’s static typing makes it simple to maintain.
Rust is a statically typed language: all types are known at compile time. Rust is also strongly typed, which makes it harder to write incorrect programs.
Successful programming relies on the ability to manage complexity. The complexity of the code increases as it expands. By allowing users to keep track of what’s going on within the code, statically typed languages provide a high level of simplification.
Rust also encourages long-term manageability by not requiring you to repeat the type of variable many times.
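A small example of the inference described above (the function and values are illustrative):

```rust
// Rust infers most local types, so you state a type once and the
// compiler tracks it from there; mismatches fail at compile time,
// not at runtime.

fn total_fees(fees: &[u32]) -> u32 {
    // The result of `sum` is inferred as u32 from the return type.
    // Changing the element type without updating callers would be a
    // compile error, which is what keeps large codebases maintainable.
    fees.iter().sum()
}

fn main() {
    let fees = vec![120, 80, 300]; // inferred as Vec<u32> from usage below
    println!("total = {}", total_fees(&fees));
}
```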
5. Rust delivers high performance.
Rust’s performance is comparable to C++, and it easily outperforms languages like Python.
Rust’s rapid speed is due in part to the lack of garbage collection. Unlike many other languages, Rust moves most of its checking to compile time, so the compiler catches incorrect code right away. This stops erroneous code from spreading throughout the system and wreaking havoc.
Furthermore, as previously said, Rust is lightning fast on embedded platforms as well.
6. Development and Support for Multiple Platforms
Rust is unique in that it allows you to program both the front-end and back-end of an application. The existence of Rust web frameworks such as Rocket, Nickel, and Actix simplifies Rust development.
Open Rustup, a rapid toolchain installer and version-management tool, and follow the instructions to start developing with Rust. You can format the code any way you want, or let Rustfmt automate formatting using the standard formatting styles.
7. Ownership
Unlike many other languages, Rust has an ownership mechanism to manage memory while the program is running. It consists of a set of rules that the compiler verifies. In Rust, each value has a variable known as its owner. At any given time, there can be only one owner. When the owner goes out of scope, the value is dropped, which means the heap memory allocated for it is freed once the variable can no longer be used. Unlike in some other languages, the ownership rules offer benefits such as memory safety and finer-grained memory management.
Implementation of an Ethereum RESTful API
Introduction
In this article, we will implement an Ethereum RESTful API (Application Programming Interface). We will also give an overview of the technology stack for this task.
This article is for people who would like to learn how to implement a simple, Ethereum-based API. Note that the project is fully back-end, i.e. it doesn’t have any UI except the third-party tools that we will cover.
What is a RESTful API?
A RESTful (or REST) API is an application programming interface that has constraints of REST architectural style. REST stands for Representational State Transfer.
API is a set of definitions for building an application.
REST is a set of architectural constraints.
Note: Later in this article, we will see how we can combine REST and MVC (Model-View-Controller), which are totally different concepts, but which can coexist.
Model-View-Controller Design Pattern
Model-View-Controller (MVC) is a design pattern that divides program logic into three elements:
- Model. Model is the main component of the pattern. It is the dynamic data structure of an application, and it is independent of the user interface. It manages the data and the logic of the application.
- View. View can be seen as the front-end for an application.
- Controller. Controller accepts the input and converts it to commands for the model or view.
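To make the division of responsibilities concrete, here is a minimal MVC sketch in plain JavaScript. The names below (AccountModel, accountView, AccountController) are illustrative only; they are not part of the API we build in this article.

```javascript
// Model: owns the data and the logic, independent of any UI.
class AccountModel {
  constructor() { this.accounts = []; }
  add(address) { this.accounts.push(address); }
  list() { return [...this.accounts]; }
}

// View: renders model data for the outside world.
const accountView = {
  render(accounts) { return 'Accounts: ' + accounts.join(', '); }
};

// Controller: turns input into commands for the model and view.
class AccountController {
  constructor(model, view) { this.model = model; this.view = view; }
  create(address) {
    this.model.add(address);
    return this.view.render(this.model.list());
  }
}

const controller = new AccountController(new AccountModel(), accountView);
console.log(controller.create('0xabc')); // prints "Accounts: 0xabc"
```

In our API, the View element is replaced by the HTTP responses, since the project is purely back-end.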
Task
Implement an Ethereum based RESTful API, with the following functionalities:
- Ethereum address creation.
- Storing the address in a database.
- Deposit assets to the stored address.
- Assets withdrawal from the stored address.
Technology Stack and Tools
- ExpressJS
- MongoDB
- MongoDB Compass UI
- Ganache
- Infura
- Postman
Prerequisites
Before using this API, you need to have the following installed:
- NodeJS
- MongoDB (version 3.2 is used)
- MongoDB Compass UI (recommended UI for MongoDB)
- Ganache
- Postman
It is also necessary to create an Infura account and a project. The Project ID and mnemonic will be used in the .env file.
Note: You can copy-paste a mnemonic from Ganache, or generate it via the following command:
npx mnemonics
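For reference, a minimal .env file for this project might look like the following sketch. The values are placeholders, not real credentials; INFURA_PROJECT_ID matches what the controller code reads from process.env, while the MNEMONIC variable name is an assumption based on the prerequisites above.

```
INFURA_PROJECT_ID=your-infura-project-id
MNEMONIC=your twelve word mnemonic phrase copied from ganache goes here
```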
After installing NodeJS, you need to install the Node modules in the project folder (e.g. with npm install). When everything is ready:
- Start the MongoDB server.
- Start MongoDB Compass UI.
- Enter the following connection string:
mongodb://127.0.0.1/account
- Start the API server using the following command:
npm run build && node ./dist
Note: API server port is 4030.
Functionalities
- Account Creation
- Deposit
- Withdrawal
Account Creation
An account (Ethereum address) can be created with a POST request in Postman, as follows:
POST http://localhost:4030/account/createAccount
Ethereum address and a private key are displayed in the command line.
Ethereum address is stored in MongoDB.
Note: You should copy-paste the private key somewhere for later deposit/withdrawal usage. Private keys are shown with the 0x prefix, and you should ignore that prefix.
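As a small illustrative helper (not part of the article’s API code), stripping that prefix can be done like this:

```javascript
// Strip a leading "0x" prefix from a hex private key, as the note suggests.
function stripHexPrefix(key) {
  return key.startsWith('0x') ? key.slice(2) : key;
}

console.log(stripHexPrefix('0xabc123')); // prints "abc123"
```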
Deposit
It is possible to use one of the Ganache accounts that already have some assets as a sender.
While the API server is running, open the src/eth-controller.js file and, in the eth_deposit function, insert the address and private key of one of your Ganache accounts. In the same function, for the receiver parameter, insert your newly created Ethereum address.
A deposit from a Ganache account to a newly created address is possible via Postman, as follows:
POST http://localhost:4030/account/deposit
Withdrawal
Here, one of the Ganache accounts is used as a receiver.
While the API server is running, open the src/eth-controller.js file and, in the eth_withdraw function, insert the address and private key of your newly created account. In the same function, for the receiver parameter, insert one of the Ganache accounts.
A withdrawal from a newly created account to a Ganache account is possible via Postman, as follows:
POST http://localhost:4030/account/withdraw
Technology Stack and Tools Overview
Before we begin with the implementation, we will overview a technology stack for our project.
NodeJS
NodeJS is a server-side JavaScript platform: it runs JavaScript as a stand-alone application rather than browser-only scripts. NodeJS encourages non-blocking, asynchronous programming. The asynchronous style can be dangerous, though: complex nested callbacks lead to the well-known problem called “callback hell”.
It is a popular platform for building RESTful APIs, so it is a natural choice for our project.
ExpressJS
Express is a web framework for NodeJS. It can be seen as a layer built on the top of the NodeJS, that helps manage a server and routes.
It follows a Model-View-Controller (MVC) structure, which makes it a good fit for our design.
MongoDB
MongoDB is a document-oriented database. MongoDB and other NoSQL databases have applications in Blockchain, so we are going to use MongoDB for our project, rather than some Relational Database Management System (RDBMS).
MongoDB Compass
MongoDB Compass is the official GUI for MongoDB. It is an alternative to Mongo Shell. There are also other GUIs that can be used with MongoDB.
Ganache
Ganache has two components:
- Ganache CLI (Command-Line Interface). Ganache CLI lets you run a local Ethereum blockchain. This blockchain is not connected to any public testnet, nor to the mainnet.
- Ganache GUI (Graphical-User Interface). Ganache GUI is a graphical interface for Ganache CLI. It can run as a stand-alone desktop app.
Infura
Infura is an API that provides the tools which allow blockchain applications to be taken from testing to deployment, with access to Ethereum and IPFS.
These are a few examples of problems that Infura can solve:
- Long initialization time. It can take a really long time to sync a node with the Ethereum blockchain.
- Cost. It can get expensive to store the full Ethereum blockchain.
Infura solves these problems by requiring no syncing and no complex set-up.
Note that there are also other service providers similar to Infura.
Postman
Postman is a platform for building and testing APIs. It can make various types of HTTP requests and save environments for later use.
Here, we are going to test HTTP POST requests.
Directory Structure
Since we have a small number of files in this project, we will keep a simple directory structure. Note that in a larger project you would make dedicated folders for the MVC design pattern (Model, View, and Controller folders).
Here is the directory structure:
project
└───.env
└───package.json
└───dbconn.js
└───model.js
└───eth-controller.js
└───controller.js
└───route.js
From this directory structure, we can see how REST can be combined with MVC. The route.js file represents the REST module in our API.
Implementation
Now that we have seen an overview of this API and an overview of used technologies, we can start with the implementation.
We will start by defining the package.json file:
{
"name": "eth-api",
"version": "1.0.0",
"description": "",
"main": "dbconn.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"dependencies": {
"body-parser": "^1.18.3",
"express": "^4.16.3",
"mongodb": "^3.1.4",
"mongoose": "^5.2.14",
"web3": "^1.0.0-beta.36",
"dotenv": "^8.2.0",
"nodemon": "^2.0.7"
}
}
Next, we will define the dbconn.js file, which is the server for this API:
const express = require('express');
const bodyParser = require('body-parser');
const mongoose = require('mongoose');
require('./model.js');
const account = require('./route.js');
const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({extended: true}));
let port = 4030;
mongoose.Promise = global.Promise;
mongoose.connect('mongodb://127.0.0.1/account', { useNewUrlParser: true });
let db = mongoose.connection;
db.on('error', console.error.bind(console, 'MongoDB connection error:'));
let accountDB = db.collection("accounts");
app.use('/account', account);
app.listen(port, () => {
console.log('Server is up and running on port number ' + port);
console.log(
accountDB != null ?
accountDB.name + " collection found" :
"accounts collection not found"
);
});
We will proceed with Model-View-Controller (MVC) design pattern. Note that, since we are building a Back-End API, there is no View (user-interface).
Model
Here is the model.js file:
const mongoose = require('mongoose');
const Schema = mongoose.Schema;
let AccountSchema = new Schema({
ETH: {type: String, required: false, max: 64},
});
module.exports = mongoose.model('Account', AccountSchema);
Controller
We will divide the Controller module into two parts. The goal of this is to wrap the Web3 “magic” in one file and then call those functions in basic controller logic.
Here is the eth-controller.js file, which contains the Web3 “magic”:
require('dotenv').config();
const Web3 = require('web3');
const web3 = new Web3(new Web3.providers.HttpProvider(`https://rinkeby.infura.io/v3/${process.env.INFURA_PROJECT_ID}`));
exports.get_new_address = async function (req,res) {
let ethData = {};
try {
ethData = await web3.eth.accounts.create();
console.table(ethData);
ethData.result = ethData.address;
return ethData.result;
} catch(err) {
ethData.result = err.message;
console.log("REQUEST ERROR");
return ethData.result;
}
}
exports.eth_deposit = async function(req, res) {
const web3_2= new Web3('http://127.0.0.1:7545');
//Insert other address and private key of a local Ganache account
const address = '';
const privateKey = '';
//Insert other, newly created address
const receiver = '';
console.log('Sending a transaction ...');
const createTransaction = await web3_2.eth.accounts.signTransaction({
from: address,
to: receiver,
value: web3_2.utils.toWei('2', 'ether'),
gas: 21000,
},
privateKey
);
const receipt = await web3_2.eth.sendSignedTransaction(createTransaction.rawTransaction);
console.log('Transaction successful');
}
exports.eth_withdraw = async function(req, res) {
const web3_3= new Web3('http://127.0.0.1:7545');
//Insert other address and private key of a newly created account
const address = '';
const privateKey = '';
//Insert other address from a local Ganache account
const receiver = '';
console.log('Sending a transaction ...');
const createTransaction = await web3_3.eth.accounts.signTransaction({
from: address,
to: receiver,
value: web3_3.utils.toWei('1', 'ether'),
gas: 21000,
},
privateKey
);
const receipt = await web3_3.eth.sendSignedTransaction(createTransaction.rawTransaction);
console.log('Transaction successful');
}
Here is the controller.js file, which calls the eth-controller.js functions:
const mongoose = require('mongoose');
const Account = mongoose.model('Account');
const ethereum_controller = require('./eth-controller.js');
exports.new_account = async function (req, res) {
let ethData;
let newAccount = new Account (
{
ETH: req.body.ETH,
}
);
ethData = await ethereum_controller.get_new_address();
newAccount.ETH = ethData;
newAccount.save(function (err, dbResponse) {
if (err) {
return res.send(err);
}
console.log("***" + dbResponse + "***");
res.send(dbResponse);
});
}
exports.deposit = async function(req, res) {
await ethereum_controller.eth_deposit();
res.send("Deposit transaction sent");
}
exports.withdraw = async function(req, res) {
await ethereum_controller.eth_withdraw();
res.send("Withdrawal transaction sent");
}
Router
Lastly, we will define a router file, route.js, which makes the HTTP POST requests possible:
const express = require('express');
const account_controller = require('./controller.js');
const router = express.Router();
router.post('/createAccount', account_controller.new_account);
router.post('/deposit', account_controller.deposit);
router.post('/withdraw', account_controller.withdraw);
module.exports = router;
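Conceptually, the router maps a method-and-path pair to a controller function. A simplified sketch of that dispatch (not Express’s real implementation; the handlers here just return their names instead of touching Ethereum) looks like this:

```javascript
// Simplified sketch of the routing table that route.js builds.
const routes = {
  'POST /account/createAccount': () => 'new_account',
  'POST /account/deposit': () => 'deposit',
  'POST /account/withdraw': () => 'withdraw',
};

function dispatch(method, path) {
  const handler = routes[method + ' ' + path];
  return handler ? handler() : '404 Not Found';
}

console.log(dispatch('POST', '/account/deposit')); // prints "deposit"
```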
Conclusion
In this article, we have:
- Learned about RESTful API.
- Learned about MVC design pattern.
- Defined a task.
- Defined technology stack and tools.
- Seen the prerequisites for using our API.
- Seen the functionalities of our API.
- Overviewed the technology stack and tools.
- Defined the directory structure for our project.
- Implemented our API.
We have seen how to implement a basic Ethereum-based RESTful API. There are some bad practices in this implementation, e.g. hard-coded strings (addresses and private keys). This is just a demonstration for local use; in production code you should avoid these practices.
From an article by Nemanja Grubor.
Empowering the Future of Freelancing: Blockchain’s Disruptive Potential
Freelancing has undergone a remarkable evolution, offering individuals worldwide the freedom to work on their terms and providing businesses with access to diverse talent pools. However, traditional freelance platforms have limitations, such as high fees and centralized control. Enter blockchain technology—an innovative and decentralized system poised to revolutionize the freelancing industry. In this blog, we explore the intersection of freelancing and blockchain, delving into the transformative benefits, present challenges, and exciting future prospects this amalgamation holds.
I. Understanding Blockchain Technology: To fully comprehend the potential impact of blockchain on freelancing, it’s crucial to grasp its core principles. Blockchain represents a distributed ledger that securely records transactions across multiple computers. Its inherent features include decentralization, immutability, transparency, and cryptographic security.
II. Unleashing the Power of Decentralized Freelancing: By eliminating intermediaries, blockchain-based platforms disrupt the traditional freelance paradigm, fostering direct peer-to-peer interactions. This decentralization yields substantial benefits. Firstly, it slashes fees, enabling freelancers to retain a fairer share of their earnings. Secondly, it instills trust through smart contracts, automating agreements, and safeguarding funds in escrow until job completion. Thirdly, it unlocks a global talent pool, granting clients access to skilled professionals across borders.
III. Empowering Freelancers with Blockchain: Freelancers stand to gain a multitude of advantages by embracing blockchain-based platforms. Firstly, they benefit from reduced fees and expedited payment settlements due to the removal of intermediaries. Secondly, blockchain offers a transparent and immutable work history, establishing a reputation system that enhances credibility. This augmented reputation leads to increased job opportunities and improved compensation. Moreover, smart contracts protect freelancers against non-payment or client disputes, as funds remain secure in escrow until contractual obligations are fulfilled.
IV. Advantages for Clients in the Blockchain Era: Clients also experience significant advantages by embracing blockchain-based freelancing platforms. They gain unparalleled access to a diverse talent pool, unrestricted by geographical boundaries. This wider talent pool enables businesses to locate specialized skills that may be scarce in local markets. Additionally, smart contracts ensure that clients only release payment upon satisfactory completion of work, bolstering security and guaranteeing high-quality deliverables.
V. Overcoming Challenges and Envisioning the Future: While the potential of blockchain in freelancing is immense, challenges persist. Widening adoption and raising awareness remain key obstacles, as many freelancers and clients are yet to embrace blockchain technology. Scalability and transaction speed are also vital aspects that demand attention for widespread acceptance. However, these challenges are expected to be surmounted as blockchain technology advances and matures.
Looking ahead, the future of blockchain in freelancing is auspicious. The ability to execute secure, efficient, and trustless transactions without intermediaries has the potential to reshape the industry. As more blockchain-based platforms emerge and gain momentum, freelancers and clients will continue to benefit from enhanced transparency, reduced costs, and an inclusive global marketplace.
Blockchain technology possesses the transformative power to reshape the freelancing landscape, addressing long-standing challenges while introducing groundbreaking possibilities. Decentralized platforms offer advantages such as lower fees, increased trust, and access to a global talent pool. Freelancers can relish fairer compensation, fortified reputation systems, and enhanced security, while clients gain access to specialized skills and secure, superior deliverables. Despite existing hurdles, the future outlook for blockchain in freelancing is exceedingly promising. As awareness grows, technological advancements overcome scalability obstacles, and adoption becomes widespread, we can anticipate an era of efficient, transparent, and inclusive freelancing ecosystems empowered by blockchain.
What is GameFi?
GameFi is a word made up of two components: video games and decentralized finance (DeFi). These video games use blockchain technology, by which the players who play them are the sole, verified owners of the games’ virtual elements.
The traditional video game model is “pay-to-win”, in which the player must pay to gain an advantage, such as upgrades or tradable digital assets.
GameFi, on the other hand, is “play-to-earn”: through quests, trading, or other mechanisms, GameFi allows gamers to earn digital assets for their in-game efforts.
So, now you understand the basic difference between traditional gaming and GameFi. But there is another key difference: in traditional gaming, players could lose their investment at any time if the publisher shut the game down or went out of business. GameFi games, on the other hand, keep their assets stored on a distributed network. These networks operate independently of any single organization, substantially de-risking the digital assets.
How to start playing a GameFi game?
To start playing a GameFi game, you will need the following:
- A Crypto Wallet
- Fund Your Wallet
- Buy the basic digital assets to play
Crypto Wallet: You will need a crypto wallet to hold crypto and NFTs. Since there is currently no cross-compatibility, you will have to select the wallet according to the platform you are playing on. The blockchain game Axie Infinity, for instance, was built using the Ethereum protocol, so you’ll need an Ethereum-compatible wallet, like MetaMask, to play.
Fund Your Wallet: Some of the platforms need you to buy their cryptocurrency before you can start playing on their platform. So you will need to fund your wallet so that you can start playing.
Buy the basic digital assets to play: In most GameFi games, in order to generate profits, you need to do so through your avatar or similar digital assets. This means that before playing you will need to buy them. For example, Axie Infinity requires its players to have three Axies in their wallets to start playing.
Some of the GameFi platforms are:
Decentraland: It is a virtual world run by its users. Every piece of land and every element in this virtual world is an NFT. In early 2021, Decentraland had an average of 1,500 daily active users. In March, it reached more than 10,000.
Axie Infinity: It is an NFT-based online video game developed by the Vietnamese studio Sky Mavis, which uses the cryptocurrency AXS and SLP based on Ethereum. Axie Infinity has rapidly become one of the biggest blockchain games in the world. Players collect, train, and battle creatures — NFTs called Axies — to progress through the game.
The Sandbox: It is a virtual Metaverse where players can play, build, own, and monetize their virtual experiences. SAND, the native token of The Sandbox, is used across the Sandbox ecosystem as the basis for all kinds of interactions and transactions in the game.
Forest Knight: One of the first mobile GameFi games, Forest Knight is a turn-based fantasy RPG that hooks its users by giving them the chance to earn rare NFTs while they beat back the forces of evil. The game currently has three NFT item types — weapons, accessories, and skins — but the publishers will soon introduce numerous new types — like pets and property — to deepen the game’s economy and trading experience.