Innovative Ways Companies Are Using Blockchain

Blockchain technology has a number of applications beyond cryptocurrency, and businesses are using it to solve specific problems in their fields.

Blockchain technology is often associated with cryptocurrency — but it doesn’t have to be. The revolutionary technology that powers Bitcoin and other virtual currencies has a number of other applications, which is why more and more businesses are using it to resolve specific problems in their fields.

The technology makes it easy to share information securely, speed up transactions, and store data. As more and more industries become data-driven, there are tremendous implications for any technology that makes it easier to share data. That’s why it’s a good idea to ask yourself what the blockchain can do for your business.

What is blockchain?

‌In some ways, blockchain is comparable to a gigantic database. It’s often referred to as a ledger because it’s commonly used to keep records of transactions. However, the technology can be used to record any blocks of data, from medical information to financial records. Blockchains are immutable, which means that once data is stored, it can’t be deleted or altered. And because the blockchain is based on encryption, it’s difficult for hackers to target, making it a highly secure option for storing important data.

Using blockchain technology as a business tool

‌Blockchain technology can build automation into just about any procedure that revolves around data sharing and verification. The Ethereum blockchain is already being widely used for smart contracts, which automate payment for a product or service as soon as the terms of a contract have been met. Now, businesses are using blockchain to help make payments, verify identities, and transfer documents.

What are some uses of the blockchain?


Insurance underwriting

Underwriters have begun using blockchain to speed up the process of assessing customer risk and determining coverage.

Risk assessment is at the heart of what insurance companies do. Deciding on the level of risk that a customer poses is what allows insurance providers to cover as wide a group of customers as possible. But to correctly calibrate risk levels, underwriters need to have access to a very broad range of data. This includes medical data on the customer, family history, and information about the customer’s lifestyle and behaviors.

Using traditional methods, this process can be very cumbersome and time-consuming. It takes time to obtain data and the requisite permission to access that data. Blockchain, however, can dramatically speed up the process of getting access to data, thereby speeding up the whole underwriting process.

Blockchain can also bring greater transparency to the process and improve trust, since all parties that have access to the blockchain have eyes into when and how data is accessed.

Cross-border payments

‌Transferring money across borders has always been a complex undertaking. Putting funds into the right accounts in the right country at the right time, so that payments can flow from one currency to another, is a major operation. Fluctuating exchange rates and the need to simultaneously update banking records in several countries just add more pieces to an already difficult puzzle.

Some global payment solutions firms have begun using the blockchain to expedite the process of transferring money. The blockchain can serve as a bridge between two currencies so that payments are seamlessly paid out and received in the correct currency, right away. As our world grows ever more globalized and interdependent, it seems likely that there will be an ever greater demand for such technology.

Know Your Customer

‌Many in the insurance and banking fields are subject to Know Your Customer (KYC) regulations that require them to collect, validate, and verify documents so they can be certain of their customers’ identities. In the past, this has been a tedious and time-consuming procedure, and it can be alienating to customers. Just as importantly, the need to carry out such extensive due diligence imposes a real burden on institutions, both in terms of time and financial cost.

Enter the blockchain. With a blockchain network, documentation can be shared instantly with everyone who has access to the network. Because the blockchain is immutable, there are no concerns about documents being tampered with. The whole process of identity verification immediately becomes far speedier and smoother.

Final thoughts

‌Navigating through the uses of any new technology can be challenging. But the challenge is ratcheted up a notch when the new technology is something like the blockchain, which tends to be both widely discussed and poorly understood. There are endless rumors and theories in circulation about the blockchain. It can be hard to sift out the truth from the fiction.

That’s why it’s so important to work with a professional organization that truly understands the field. Skeps is a technology layer that connects merchants/sellers and their customers with the financing services they need. Skeps has experience enabling businesses to use the blockchain to improve their services. 

Contact us today or request a demo to learn more about what blockchain technology can do for your business.

Reconnecting To IPFS Pubsub After Connectivity Issues

There are multiple ways to communicate between different nodes. Read on to know what we use to transfer information between different nodes.

Being a decentralized platform, we need a way for different nodes to communicate. There are multiple ways to do so. We use blockchain events to transfer sensitive information between nodes, but for all other information we can use IPFS pub/sub, which is based on the publisher-subscriber pattern often used to handle events in large-scale networks. Being a relatively new feature in IPFS, pubsub subscription reconnects are not officially supported. In this article, we will learn how to handle IPFS pubsub subscription disconnects in a production environment. The version of ipfs used is 0.4.16.

Let’s see how we can gracefully handle ipfs subscribe disconnects in Node.js.
We use the Node.js HTTP package to make the HTTP calls, as ipfs exposes an HTTP endpoint for listening to a subscribed topic. Issuing an HTTP GET request gives us a response stream that can trigger multiple callbacks. The stream provides the following events:

  1. data – provides the data coming from the stream
  2. error – emitted when there is an error while making the request
  3. end – called whenever the connection is terminated

We are storing each request in a subscription object by generating a random id as its key.

File subscriptions.js

In this file we will handle the ipfs subscriptions.

This file returns a subscribe method which can be used anywhere in the project to subscribe on ipfs topic.

const http = require('http'); // Node's HTTP client, used to call the ipfs API.

const subscriptions = {};

const subscribe = module.exports.subscribe = async (topic, callback, id = null) => {
    // Generate a (reasonably) unique id for this subscription, unless we are
    // re-subscribing and were handed the previous id.
    if (id == null) id = new Date().getTime() * 100 + Math.floor(Math.random() * 99);
    const reqObj = http.get(`http://localhost:9096/pubsub/sub?arg=${topic}&discover=false`, (res) => {
        res.on("data", async (msg) => {
            try {
                const str = Buffer.from(msg, 'base64').toString('utf-8');
                const strJSON = JSON.parse(str);
                await callback(strJSON); // hand the parsed message to the caller
            } catch (err) {
                console.log("unable to parse message on " + topic);
            }
        });
        res.on("end", async () => {
            // If the subscription was canceled deliberately, just clean up.
            if ("isCanceled" in subscriptions[id] && subscriptions[id].isCanceled == true) {
                delete subscriptions[id];
                return;
            }
            // Otherwise abort the old stream and re-subscribe after a delay.
            await cancel(id);
            setTimeout(async () => {
                try { await subscribe(topic, callback, id); } catch (_) { }
            }, 10 * 1000);
        });
        res.on("error", async (err) => {
            console.log("unable to add subscription on " + topic);
            setTimeout(async () => {
                try { await subscribe(topic, callback, id); } catch (_) { }
            }, 10 * 1000);
        });
    });
    reqObj.topic = topic;
    subscriptions[id] = reqObj;
    console.log("subscription added on " + topic);
    return id;
};

const cancel = module.exports.cancel = async (id) => {
    if (id in subscriptions) {
        if ("abort" in subscriptions[id]) try {
            subscriptions[id].isCanceled = true;
            subscriptions[id].abort(); // terminate the underlying stream
        } catch (_) { }
        console.log("subscription removed on " + (subscriptions[id].topic || id));
    }
};

The events we are most concerned with are ‘error’ and ‘end’.

Let us discuss the ‘error’ event first. This event fires whenever there is an error while making the HTTP call, in which case we only need to retry the ipfs subscription HTTP call. We add some delay using setTimeout and call the method recursively, passing in the previous id so that the old subscription object gets overwritten.

Let us discuss the ‘end’ event now. The end event is called when the request is terminated – there is no more data to be sent in the response. Here we also need to subscribe again on the same topic, as we did for the ‘error’ event, but we additionally need to abort the previous stream explicitly: if any data is still pending in the response, it must not be delivered more than once.

Let us discuss the ‘data’ event now. The data received from ipfs is base64 encoded. After decoding it, we parse it as JSON, since our policy is to always send JSON data over ipfs pubsub (this is a convention, not a requirement). The callback is then called with the resulting JSON object.

The subscription object can also be saved in a database and re-initialized upon a service restart. For our case, we’re simply storing the subscriptions in memory.

Making Finance Smarter With Smart Contracts

Smart Contracts are an up-and-coming technology that can revolutionize our financial landscape, and leaders are cautioned to understand it before jumping in.


Before understanding Smart Contracts, you need to know what blockchain is. A blockchain is a form of distributed, immutable ledger. Distributed means it can be hosted on several servers to make it highly available. Immutable means that no one can edit data already written in the ledger; any change requires a new entry to be appended to the blockchain, which gives the data built-in versioning. Everyone who is part of the blockchain network has access to the full history of changes to the data.

Now, what is a smart contract?

A smart contract is a program that is stored on the blockchain which executes actions when a specific event occurs. This event can have an impact on backend calculations. Smart Contracts contain terms and conditions agreed upon by the participants to render services, transfer assets, and record transactions. They are designed to be run by the participants without the need for intermediaries.
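
For illustration only – production smart contracts are deployed on-chain (e.g. in Solidity on Ethereum) rather than run as ordinary programs – the "execute on an event, no intermediary" idea can be sketched like this:

```javascript
// Toy escrow "contract": funds are released only when the agreed
// delivery condition is met; no intermediary decides the outcome.
function createEscrow(amount, seller) {
    let state = 'FUNDED';
    return {
        // Event handler: runs automatically when delivery is confirmed.
        onDeliveryConfirmed() {
            if (state !== 'FUNDED') throw new Error('already settled');
            state = 'RELEASED';
            return { payee: seller, amount }; // payment instruction
        },
        getState: () => state,
    };
}

const escrow = createEscrow(500, 'acme-corp');
console.log(escrow.getState()); // 'FUNDED'
const payout = escrow.onDeliveryConfirmed();
console.log(payout); // { payee: 'acme-corp', amount: 500 }
console.log(escrow.getState()); // 'RELEASED'
```

On a real chain, the terms (amount, payee, condition) and every state transition would be recorded in the ledger, visible to all participants.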

What are the current issues being faced by the finance industry?

Due diligence and bookkeeping consume most of the resources in finance. Since money is at stake, every penny should add up, and every piece of information should be correct. Reconciliation becomes challenging when these details are dispersed across multiple systems. Inter-entity communication requires even more verification to prevent fraud: any information shared by an external entity is subject to verification, which adds extra cost for a company. Acting swiftly on new information still involves human effort. For example, transferring the deeds of a property after successful loan disbursement still requires multiple intermediaries.

In the underwriting industry, underwriters are a line of defense between business and loss. Each underwriter has to go through multiple documents and pieces of information. It is a time-consuming process, and when coupled with increasing workloads, standards degrade.

In the loan industry, the loans in mortgage pools are verified against actual documentation. These audits are very costly – up to $100 each – making full verification cost-prohibitive for smaller loan pools. Instead, the pools are sampled, and typically less than 1% of the loans are verified, introducing uncertainty into the diligence results. What blockchain means for finance is that nobody can tamper with the data stored on it: if anyone tries, everyone in the network will know about it, bringing transparency and reducing the need for repeated verification at every step.

How do Smart Contracts fit into the financial landscape?

Since Smart Contracts are short programs that execute when a condition changes on the blockchain, they can serve multiple purposes. For example, suppose an insurance company stores its customer data on a blockchain and re-prices premiums every five years. A Smart Contract can use the date of birth stored on the network to recalculate each premium automatically when the time comes.
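
A sketch of that example in plain JavaScript (the age bands and rates are invented for illustration; on a real network this logic would live in the contract itself):

```javascript
// Hypothetical premium schedule: the base rate rises with the age band.
function premiumFor(dateOfBirth, basePremium, asOf = new Date()) {
    const age = Math.floor((asOf - new Date(dateOfBirth)) / (365.25 * 24 * 3600 * 1000));
    const band = Math.floor(age / 5); // re-priced every five years of age
    return Math.round(basePremium * (1 + 0.1 * band));
}

// The "on-chain" record holds the date of birth; the contract derives the rest.
console.log(premiumFor('1980-06-01', 100, new Date('2022-01-01'))); // age 41, band 8 -> 180
```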

What are the uses of Smart Contracts?

Since data is already stored in an immutable ledger, Smart Contracts can be run on it to bring 100% diligence to the underwriting process. Once data is verified, the verification result can itself be stored on the blockchain to prevent future revalidations. Having loan information on the blockchain means Smart Contracts can be used to reach out to downgraded loan borrowers in time, before a loan becomes a non-performing asset.

Smart Contracts can gather documents from the blockchain and submit them for claims immediately upon a preconfigured triggering event, leading to faster and more consistent payouts. Submitting loan documents on the blockchain enhances security and transparency and lowers fraud risk. Though these may read as mere buzzwords, each one maps to a costly operation in finance.

An independent 3rd party can audit the code of Smart Contracts to verify the integrity of the agreement. It also maintains data integrity because of the immutable nature of the blockchain. These can also be used to compile monthly surveillance performance reports.

The “nots” of Smart Contracts

Smart Contracts are legally enforceable in 47 states of the United States, while the United Kingdom’s legislature is still researching the Smart Contracts landscape, and many other countries are still working on legal frameworks around blockchain technology. Like any other technology, Smart Contracts are made by humans and are prone to bugs, which can jeopardize large amounts of money when the stakes are high. And even though blockchain is distributed in nature, it is still susceptible to scaling challenges: as the amount of data and the number of nodes in the blockchain increase, the execution speed of Smart Contracts decreases.


Smart Contracts are an up-and-coming technology that can revolutionize our financial landscape. Amid all the blockchain hype, leaders are cautioned to understand its pros and cons before jumping in.

If all you have is a hammer, everything looks like a nail!

Logging In Microservices – Best Practices

Learn some best practices to follow while logging in microservices.

In this article, we will examine some best practices to follow while logging microservices and the architecture to handle distributed logging in the microservices world.

Microservices architecture has become one of the most popular choices for large-scale applications in the world of software design, development, and architecture, mainly due to its benefits over the traditional counterpart, the monolithic architecture. These benefits arise from the shift from one single large, tightly coupled unit (the monolith) to multiple small, loosely coupled services, each with limited and specific functionality to deliver. With smaller codebases and decreased dependencies and coupling, we get to leverage the power of distributed teams, in turn reducing the time to market (or production) of any application. Other advantages include a language-agnostic stack and selective scaling.

While logging in microservices shares the advantages that come with the architecture, it also comes with its own set of complexities – a single request in this architecture can span multiple services, and it might even travel back and forth between them. To trace the end-to-end flow of a request and identify the source of errors as they originate across our systems, we need a logging and monitoring setup in place. We can adopt one of two solutions: a centralized logging service, or an individual logging service for each service.

Individual logging service VS Centralized logging service

An individual logging solution per service can become a pain point as the number of services grows: for every process flow you want to inspect, you may need to go through the logs of each service involved in serving that request, making issue identification and resolution a tough job. With a centralized logging service, on the other hand, you have a single go-to place, which – backed by enough context around the logs and a well-thought-out design – can do wonders for the same task.

At Skeps, we use centralized logging solutions for our applications running on microservice architecture.

Centralized logging service

A single centralized logging service that aggregates logs from all the services is the preferable solution in a microservices architecture. In the software world, unique and unseen problems are not rare, and we certainly do not want to be juggling multiple log files or purpose-built dashboards to understand what caused them. While designing a standard centralized logging scheme, one could – in fact should – follow these norms:

Using a Correlation Id for each request

A correlation id is a unique id that can be assigned to an incoming request, which can help us to identify this request uniquely in each service.

Defining a standard log structure

Defining the log structure is the most crucial part of logging effectively. First, we need to identify why we are enabling logging at all. A few questions worth answering:

  1. How did each service respond while delivering on its front – did it succeed or raise errors? Whichever it is, our aim should be to capture as much context around it as possible.
  2. What process/function within the service generated the log?
  3. At what time during the process was the log generated?
  4. How crucial is the process that generated the log?

While answering these questions, we get to derive a format, which can include, but is not limited to the following things:

  1. Service name
  2. Correlation Id
  3. Log String (can include a short description, name of the generating method)
  4. Log Metadata (can include error stacks, successful execution response(s) of a subtask)
  5. Log Timestamp
  6. Log Level (DEBUG, WARN, INFO, ERROR)
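
The format above can be captured in a small helper (the field names are the ones derived in this article, not an industry standard):

```javascript
const LEVELS = ['DEBUG', 'INFO', 'WARN', 'ERROR'];

// Build one structured log entry following the format derived above.
function makeLogEntry({ service, correlationId, message, metadata = {}, level = 'INFO' }) {
    if (!LEVELS.includes(level)) throw new Error('unknown log level: ' + level);
    return {
        service,        // Service name
        correlationId,  // Correlation Id
        message,        // Log String
        metadata,       // Log Metadata (error stacks, subtask responses, ...)
        level,          // Log Level
        timestamp: new Date().toISOString(), // Log Timestamp
    };
}

const entry = makeLogEntry({
    service: 'payments',
    correlationId: 'req-123',
    message: 'charge failed in processCharge',
    metadata: { errorStack: 'Error: card declined ...' },
    level: 'ERROR',
});
console.log(JSON.stringify(entry));
```

Emitting entries as JSON lines like this makes them trivial to index and query in the centralized service later.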

Customized Alerts

When something critical breaks, we do not want it to sit in our databases without our knowing about it in real time. So, it is pivotal to set up notifications for events that indicate a possible problem in the system; such events can be categorized by reserving a dedicated log level for them.

A set of rich querying APIs or Dashboard

Once all this information is stored, it is important to be able to make sense of it. Rich APIs can be developed to filter by correlation id, log level, timestamp, or any other parameter that helps identify and resolve issues at a much faster pace.

Decide a Timeline to Clean Logs

Decide upon an appropriate timeline to clear the clutter and cut down on your storage usage. The right timeline depends on your application’s needs and on how reliable your shipped code has proven in the past. This applies when you are not storing any sensitive information; otherwise, you need to follow the compliance rules in place. There is a workaround, though: keep the sensitive information in a dedicated field, so that this field alone can be cleared from each stored log entry as per the compliance timeline.
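
Both policies can be sketched together (entry fields are illustrative): whole entries are dropped after the general retention window, while the dedicated sensitive field is cleared on its own, usually shorter, compliance timeline.

```javascript
const DAY = 24 * 3600 * 1000;

// Apply retention: delete old entries entirely, and null out the
// sensitive field of entries past the (shorter) compliance window.
function applyRetention(entries, now, retentionDays, sensitiveDays) {
    return entries
        .filter(e => now - Date.parse(e.timestamp) <= retentionDays * DAY)
        .map(e =>
            now - Date.parse(e.timestamp) > sensitiveDays * DAY
                ? { ...e, sensitive: null }
                : e
        );
}

const now = Date.parse('2022-01-31T00:00:00Z');
const logs = [
    { timestamp: '2022-01-30T00:00:00Z', msg: 'fresh', sensitive: 'ssn' },
    { timestamp: '2022-01-10T00:00:00Z', msg: 'older', sensitive: 'ssn' },
    { timestamp: '2021-11-01T00:00:00Z', msg: 'ancient', sensitive: 'ssn' },
];
const kept = applyRetention(logs, now, 60, 14);
console.log(kept.map(e => [e.msg, e.sensitive]));
// [ [ 'fresh', 'ssn' ], [ 'older', null ] ]
```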

How to make log aggregator fail-safe?

There can be instances when the logging service (log aggregator) goes down, and it becomes a single point of failure for our analysis and debugging needs. We do not want to miss out on any logs that were generated during that outage. So, we need to develop an alternate mechanism that stores those logs until our aggregator is back online.
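
One simple shape for that mechanism (in-memory here; a local file or queue would additionally survive process restarts): buffer entries whenever a send to the aggregator fails, and flush the backlog, oldest first, once it succeeds again.

```javascript
// Wrap the aggregator: if a send fails, keep the entry locally and
// retry the whole backlog on the next log call.
function createBufferedLogger(send) {
    const backlog = [];
    return {
        log(entry) {
            backlog.push(entry);
            // Try to flush everything that is pending, oldest first.
            while (backlog.length > 0) {
                try {
                    send(backlog[0]);
                    backlog.shift();
                } catch (_) {
                    break; // aggregator still down; keep the backlog
                }
            }
        },
        pending: () => backlog.length,
    };
}

// Simulated aggregator that is down for a while, then recovers.
let up = false;
const delivered = [];
const logger = createBufferedLogger(e => {
    if (!up) throw new Error('aggregator down');
    delivered.push(e);
});
logger.log('a'); // buffered
logger.log('b'); // buffered
up = true;
logger.log('c'); // flushes a, b, c in order
console.log(delivered); // [ 'a', 'b', 'c' ]
console.log(logger.pending()); // 0
```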

Additional Pointers

  • It is advisable to generate logs asynchronously, to reduce the latencies of our process flows.
  • A pattern known as observability eases the process of monitoring distributed environments. Observability includes application log aggregation, tracing, deployment logs, and metrics, giving a wholesome view of our system.
  • Tracing measures the latencies of the processes/components/services involved and helps to identify bottlenecks. This allows us to make better decisions regarding the application’s scaling needs.
  • Deployment Logs – since microservices give us multiple deployable services, we should also compile logs of the deployments that are made.
  • Metrics refers to the performance and health of the infrastructure on which our applications/services rely, for example the current CPU and memory usage.

How Easy Is It For A Merchant To Integrate With Skeps?

Integration with Skeps is very simple. Read on to know how easy it is for a merchant to integrate with Skeps.

All you need to do is provide us with two docker-enabled servers, and our team will do the rest. Skeps also provides two business APIs that are integrated with your portal, with which you can perform the actions below:

  1. Check Approval – To get loan offers for the customer.
  2. Share Lead – To share the offer selected by the customer and get the buyer’s URL, where the customer can fill in the required details and get the loan.

Skeps provides you with a dashboard where you can monitor the below things:

  1. Evaluate your leads – You can test the offers you will get from the buyers for your customers by uploading the data of your customers in a CSV file.
  2. Manage Buyers – You can choose which buyers you want to share your leads with, and check the history and status of leads evaluated with each buyer.
  3. Check Metrics – You can do an in-depth analysis of the leads in a given timeframe, such as the approval rate, average FICO, share rate, average shared loan, etc.
  4. Check History – You can get all the information related to a lead like current status, offers received, loan amount, the buyer with which the lead was shared, etc.

Also, the client is free to add any more APIs required for reconciliation purposes, as per need. Skeps can develop and integrate them for a specific client.

Securing P2P File Transfers

Learn how to secure P2P file transfers within clients using IPFS (InterPlanetary File System) and blockchain private messaging.


This article takes you through the process of securing P2P communication within clients using a medium such as IPFS (InterPlanetary File System) and blockchain private messaging. Let’s try to develop an understanding of how this works.

What are P2P file transfers?

Peer-to-peer (or P2P) networks have a distributed architecture at their core: a group of peers works with one another instead of relying on a centralized server. So, considering the specific scenario of data or file transfer in a P2P setup, each peer (or participant) can act both as a source of data and as a consumer of data.

We primarily use IPFS as the underlying transport for connectivity and transfer of data between peers.

Now, as discussed, multiple peers have access to a file in a P2P setup – the question that arises is: can we make a targeted file transfer to a particular peer securely, without missing out on the advantages offered by this architecture? The answer is yes, by using encryption techniques on our files.

The encryption technique that we use is PGP.

What is PGP encryption?

Pretty Good Privacy (PGP) is an encryption program that provides us a way to encrypt any sensitive information.  

PGP is a good encryption technique to go with, primarily for two reasons. First, it uses a mix of both symmetric and asymmetric encryption: each peer generates a cryptographic public/private key pair, shares only the publicKey, and keeps the privateKey secure – without incurring the enormous speed costs that pure public-key encryption of large payloads would bring. Second, its design allows us to encrypt the same information, if required, for multiple target clients or peers at a time. This lets us broadcast a single encrypted message across the network while only the targeted peers can decrypt it and make sense of the data.

PGP encryption in action

For demonstration purposes, we will be proceeding with IPFS upload and download cli calls.

We have divided the process into the following steps:

  1. Generating public/private key pair

It can be generated using some information that uniquely identifies a client. It is a one-time process and is also a simple one:  

The publicKey generated is the one that can be shared openly with others. And the privateKey needs to be kept safe and not shared with anyone. This one is used to decrypt the messages.

    const openpgp = require('openpgp'); // Importing openpgp module.
    (async () => {
        const { privateKeyArmored, publicKeyArmored } = await openpgp.generateKey({
            userIds: [{ name: 'Cristopher', email: '' }],
            curve: 'ed25519', // ECC curve name
            passphrase: 'verySecurePassword' // protects the privateKey at rest
        });
        console.log(privateKeyArmored); // PGP privateKey
        console.log(publicKeyArmored); // PGP publicKey
    })();
  2. Generating an encrypted message for a client X and writing it to a file

    const openpgp = require('openpgp'); // Importing openpgp module.
    const fs = require('fs');
    (async () => {
        const publicKeysArmored = [
            `-----BEGIN PGP PUBLIC KEY BLOCK-----
            -----END PGP PUBLIC KEY BLOCK-----`
        ]; // An Array consisting of Target(s) PGP publicKey(s)
        const data = "This data needs to be securely transported."; // Data to be encrypted.
        let publicKeys = [];
        for (let index = 0; index < publicKeysArmored.length; index += 1) {
            [publicKeys[index]] = (await openpgp.key.readArmored(publicKeysArmored[index])).keys;
        }
        const { data: encrypted } = await openpgp.encrypt({ // getting the encrypted data.
            message: openpgp.message.fromText(data),
            publicKeys // encrypt for all target publicKeys at once
        });
        const encryptedData = Buffer.from(encrypted).toString('base64'); // converting encrypted data to base64
        fs.writeFile('encryptedFile.txt', encryptedData, 'base64', (err) => { // writing encrypted data to a file
            if (err) console.log(err);
            else console.log('Encrypted file is ready.');
        });
    })();
  3. Uploading this file to ipfs

    // Considering an IPFS network is up and running at IPFS_URL.
    ipfs add encryptedFile.txt
    // This would return a hash, say QmWNj1pTSjbauDHpdyg5HQ26vYcNWnubg1JehmwAE9NnS9 for example.

This hash can be sent over to the target nodes using blockchain private messaging.

  4. Downloading file from ipfs at client X’s end

    /* The hash obtained in step 3 has to be used to download the file from ipfs. */

    ipfs cat [[IPFS_HASH]] > encryptedMessage.txt
  5. Decrypting the message at client X’s end

    const openpgp = require('openpgp');
    const fs = require('fs');
    (async () => {
        const privateKeyArmored = `-----BEGIN PGP PRIVATE KEY BLOCK-----
        -----END PGP PRIVATE KEY BLOCK-----`; // PGP privateKey of client.
        const passPhrase = `verySecurePassword`; // passPhrase with which the client's PGP privateKey is encrypted.
        fs.readFile('encryptedMessage.txt', 'base64', async (err, data) => {
            if (err) console.log(err);
            else {
                const { keys: [privateKey] } = await openpgp.key.readArmored(privateKeyArmored);
                await privateKey.decrypt(passPhrase); // unlock the privateKey with the passPhrase
                const encryptedDataInAscii = Buffer.from(data, 'base64').toString('ascii');
                const { data: decrypted } = await openpgp.decrypt({
                    message: await openpgp.message.readArmored(encryptedDataInAscii),
                    privateKeys: [privateKey]
                });
                console.log(decrypted); // The decrypted message.
            }
        });
    })();

Encryption at Skeps

We care about every bit of critical data flowing across our and our partners’ infrastructure. To keep it a safe haven, we have also identified and embraced the benefits of blockchain private messaging, which allows us to share information with specific targets without broadcasting it throughout our network, ensuring encrypted data reaches only its rightful stakeholders.

So, at Skeps, we transmit sensitive information as encrypted content over the blockchain via private messages, and for non-sensitive information we use the approach discussed earlier: encrypted messages via IPFS.

We also ensure that the passPhrases and privateKeys of each participant of our network are created and maintained with the latest and most secure algorithms/technologies.

Achieve A Stable Platform With Testing Methodologies

In this article, we have discussed why testing is necessary, how testing methodologies work, and how Skeps performs these tests.


The rationale for writing tests for a piece of code is that humans are bound to make mistakes in anything they do. Rather than denying this fact, we should use it to our advantage. We can describe how a piece of software should behave, and how it should not – both paradigms of thinking are essential for a full testing system.

Positive cases: defining what the system should do to be considered working correctly.

Negative cases: defining what the system should not do to be considered working correctly.

Ways of testing

There are different ways of testing –

Manual testing – manually testing the APIs/modules, which in turn use other, smaller modules. In some cases automation testing is not possible because of certain constraints, and manual testing is the only way.

Automated testing – automation test cases are written once the test cases have been defined through manual testing. They then act as future checks that run every time a change is made in the system. Writing automation tests lets us skip the time-consuming manual tests on every change.

Testing methodologies from different levels of abstraction

Testing every line of code – unit test & test-driven development

There is a methodology called test-driven development in which unit tests are written even before the actual piece of code they test. The rationale is that we must start with a failing test case and then write the code that makes it pass; that way, we ensure the program does what it was intended to do. To say that a system is rigorously tested, we need to know how much of our code is covered by test cases – this is called the test coverage of the code. Every language has frameworks that produce a code coverage report based on the unit tests written for the code.

Who makes, thou shalt test – Dev testing

This type of testing sits, literally and metaphorically, between unit and integration testing: the author of the code walks through the test cases before launching a full test cycle. It ensures that only quality code reaches the later phases of testing and keeps silly mistakes from eating into testing time.

The sum of parts is not equal to the whole – Integration testing

In simple terms: if two systems work stably on their own, it does not follow that a system built from interactions between those two can be labeled stable without testing it. This is where integration testing comes into the picture. Rather than running a test on each module independently, as in unit tests, the module is tested as a whole, with its positive and negative behavior defined and tested separately.

Automating everything – API automation testing

In web-based scenarios, different microservices interact with one another via HTTP APIs. Ensuring the health and consistent behavior of these APIs is critical, which is why API automation testing is done to cover testing from the microservice point of view. In this type of testing, each API exposed by a microservice is given a predefined set of inputs and is tested against the predefined behavior it should exhibit. That behavior can include the state of the datastores after the API succeeds as well as the response returned by the API. Since this type of testing can mutate data in the datastores, it is done in a sandboxed environment, preferably in Docker containers with their own separate datastores. The containers are created before the test cases run and torn down after execution, ensuring a repeatable testing environment independent of whatever datastore state previous runs left behind.

What shows matters – Web/app automation testing

In a web-driven world, users interact with a piece of software via some client, be it a browser or a mobile app. To ensure a consistent system from the user’s point of view, each of these clients must be tested on each build. This can be achieved with a mix of manual and automated testing. First, manual testing is done to record the ideal state that should be shown to the customer. Then, web/app automation frameworks like Selenium, Appium, or Puppeteer are used to automate these UI interactions. The results are captured either by running the complete flow and recording the end state, or by taking screenshots at each stage and analyzing them programmatically or manually. This is still faster than fully manual testing, in which a dedicated person walks every user journey by clicking and tapping on a web or mobile device. Moreover, with the proliferation of browsers and mobile phones of different screen sizes and operating systems, automated testing scales far more easily than manual testing, though it still requires some manual intervention.

Humans to rule them all – manual testing

No matter how many automation frameworks we use and automated test cases we write, they remain static, while in dynamic scenarios things change in ways that are incomprehensible to machines. Automated test cases are suitable for repetitive checks, but humans should keep thinking about new ways a system can break. Once such a case is identified as a valid test case, it can be automated for the next releases.

Testing at Skeps

For API testing, we use the Rest Assured java library with the TestNG testing framework.

What is TestNG

TestNG is a testing framework inspired by JUnit and NUnit. It covers all types of tests: unit, functional, end-to-end, integration, etc.

A few of the advantages of using TestNG are as follows:

  1. It supports Annotations.
  2. You can run your tests in arbitrarily big thread pools with various policies available (all methods in their thread, one thread per test class, etc.).
  3. Supports ways to test that your code is thread-safe.
  4. Flexible test configuration.
  5. Support for data-driven testing with @DataProvider.
  6. Support for parameters.

Another advantage of using TestNG is that it allows grouping of test cases. At Skeps, we have more than 1,000 active test cases that are run before every build. During development, running this many test cases is time-consuming and redundant. Test case grouping allows developers to run only the test cases for the modules touched by their code, enabling a faster development cycle; the complete suite is then run once development is done.

A sample TestNG configuration testng.xml that intends to run only the ABC group of test cases would look like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "">
<suite name="ABCFlowTestsSuite">
    <test thread-count="5" name="ABCFlowTests">
        <parameter name="IP" value="" />
        <parameter name="DealerNumber" value="13" />
        <parameter name="BackendServerIP" value="" />
        <parameter name="param1" value="124" />
        <groups>
            <run>
                <include name="ABCgroup" />
            </run>
        </groups>
        <classes>
            <class name="com.skeps.tests.stableflow.ABC" />
        </classes>
    </test>
</suite>

What is Rest Assured

Rest Assured is a library we use at Skeps to facilitate API automation testing. It provides the boilerplate code for interacting with HTTP APIs.

Including the Rest Assured library is as simple as adding the following dependency to your dependency manager config.
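For Maven, the snippet would look like this (a sketch; the version is a placeholder, so check Maven Central for the current release):

```xml
<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>rest-assured</artifactId>
    <version><!-- current version --></version>
    <scope>test</scope>
</dependency>
```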


For example, checking the response status from an API call is as simple as (illustrative code):

import java.io.IOException;
import org.apache.http.HttpStatus;
import org.json.simple.parser.ParseException;
import org.testng.Assert;
import org.testng.annotations.BeforeMethod;
import io.restassured.mapper.ObjectMapperType;
import io.restassured.response.Response;

@BeforeMethod(groups = { "ABC1" })
public void setup() throws IOException, ParseException {
   Response resp = SendRequestGetResponse.sendRequestgetResponse(RequestObj, RequestUrl);
   // map the response body to a response object (illustrative)
   RespObj respObj = resp.as(RespObj.class, ObjectMapperType.GSON);
   Assert.assertEquals(resp.getStatusCode(), HttpStatus.SC_OK);
}

Coding test case assertion checks with this library is a breeze due to its human-readable syntax:


Assert.assertTrue(StatusCodelist.contains("001"), "Status code list does not have Status code 001");

Another good reason to use this library is that it’s readily integrated with testing frameworks like TestNG or JUnit.

How do we run tests periodically?

Tests are only helpful if they are run periodically, so that bugs are found fast and early. We have various testing cadences, such as daily tests, weekly tests, etc. We use Jenkins to schedule these crons as well as to run ad hoc jobs. Once a test suite has run, its results are mailed to all dev team stakeholders.

Why we use Jenkins –

It helps streamline all crons in a single place rather than scattering crontabs across various servers.

At Skeps, Jenkins is run inside docker. Setting up your own Jenkins is as simple as running 

docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins 

You can read more about Jenkins at 

How we set up our testing environment

To test our systems, databases like MySQL, Redis, etc. are required along with the code. For provisioning these, we could have created a separate, isolated environment of servers, which would have served our purpose satisfactorily. But we went one step further and created a docker-compose environment for our testing infra. It enables each of our developers, as well as the testing team, to spawn the testing setup on their own systems and tear it down in no time.

A typical docker-compose file for a simple system consisting of a REST API application and multiple databases looks like this:

version: '2'
services:
  api:
    image: <your application image>
    environment:
      LOG_LEVEL: debug
    ports:
      - 80:80
    links:
      - database
      - redis
    depends_on:
      - database
      - redis
  database:
    image: mysql
    ports:
      - 3306:3306
  redis:
    image: redis
    ports:
      - 6379:6379

The above docker-compose file spawns an API service from the given image along with a MySQL database and Redis, with the API service dependent on MySQL and Redis. Spawning our actual test infra requires many more services, but the approach remains just as straightforward.

Test Reports

Tests are only as good as the action taken on their failure. So, to keep everyone apprised of the testing status, we mail the periodic test reports to our dev teams and managers in a concise and crisp format, which makes the reports easy to analyze.

An Introduction To Decentralized Storage Systems

Read on to understand some of the decentralized storage systems and how they work.

In this blog post, we are going to dive into some decentralized storage systems and understand how they work.

For a scalable decentralized web, one of the crucial underlying layers is its storage system. In a decentralized storage system, instead of storing all the data on a centralized server, the data is split into chunks and distributed across the nodes of a peer-to-peer (P2P) network. There have been a few successful distributed file-sharing systems in the past, such as BitTorrent and Napster; however, these applications were not designed as infrastructure to be built upon. Decentralized storage serves the same goals as the decentralized web itself: security, privacy, no single point of failure, and cost-effectiveness.

Skeps’ architecture is based on private blockchain and decentralized storage systems. Private blockchain provides efficiency and transaction privacy, and decentralized storage helps to store files securely.


IPFS (Interplanetary File System)

IPFS is currently one of the most talked-about technologies in the DApps ecosystem because it has the potential to replace HTTP. It is a content-addressed storage system: for every object stored (for example files, pictures, or videos), a unique cryptographic hash is created, and that same hash is required to fetch the object, much like a URI on the web. It is a distributed file system that seeks to connect all computing devices with the same system of files, and it is inspired by previous successful peer-to-peer systems, including DHTs, BitTorrent, Git, and SFS.

IPFS is also one of the vital components in Skeps’ decentralized architecture. IPFS, along with asymmetric encryption, helps Skeps to share files securely between nodes.

The Filecoin digital currency was created to incentivize data storage on the IPFS network. The Filecoin protocol provides data storage and retrieval services via a network of independent storage providers that does not rely on a single coordinator: clients pay to store and retrieve data, storage miners earn tokens by offering storage, and retrieval miners earn tokens by serving data.


Swarm

Swarm is also a distributed storage platform and content distribution service. It is part of the Ethereum ecosystem for the decentralized web, which consists of three components:

  1. Whisper for Messaging 
  2. Ethereum for the computation power 
  3. Swarm for storage 

It is similar to IPFS in terms of using hash-based content addressing but has a built-in incentive layer. The incentive layer uses peer-to-peer accounting for bandwidth, deposit-based storage incentives, and allows trading resources for payment.

Ethereum Swarm Bee is the second official Ethereum Swarm implementation that uses libp2p as the network layer as opposed to its first implementation, which uses devp2p.


Storj

The Storj network is a robust object store that encrypts, shards, and distributes data to nodes around the world for storage. Its core technology is an enforceable peer-to-peer storage contract: a way for two parties to agree to exchange some amount of storage for money. The contract has a duration over which the renter (storage consumer) periodically checks that the farmer (storage provider) is still available, and the renter pays the farmer for each proof it receives and verifies. On completion of the contract’s duration, both parties can renegotiate or end the contract.

Storj aims to replace Amazon S3, which is currently one of the most widely used centralized cloud storage solutions, and hence, uses S3 compatible APIs.


Sia

Sia is a decentralized cloud storage platform that lets peers rent storage from each other. Sia itself stores only the storage contracts formed between parties, which define the terms of their arrangement. A blockchain, similar to Bitcoin’s, is used for this purpose.

By forming a contract, a storage provider (also known as a host) agrees to store a client’s data and to periodically submit proofs of continued storage until the contract expires. The storage provider is compensated for every proof it submits and penalized for missing proofs. Since the proofs are publicly verifiable (and publicly available on the blockchain), network consensus can be used to enforce storage contracts automatically. Clients do not need to verify storage proofs personally; they can simply upload their files and let the network do the rest.

The decentralized storage solutions discussed above are used to store static data, but web apps are highly dependent on dynamic data.

Let us discuss some database solutions for a decentralized web.


OrbitDB

OrbitDB is a serverless, distributed, peer-to-peer database in which each peer has its own instance of a given database. OrbitDB uses IPFS as its data storage and IPFS Pubsub to automatically sync databases with peers: a database is replicated between the peers, so every peer automatically gets an up-to-date view upon updates from any peer. In OrbitDB, data should be stored, partitioned, or sharded based on the access rights for that data. It supports multiple data models, namely Key-Value, Log (an append-only log), Feed (like a log, but entries can be removed), Documents (indexed JSON documents), and Counters.

OrbitDB databases are eventually consistent, which is achieved with conflict-free database merges (CRDTs).


GUN

GUN is a real-time, decentralized, offline-first graph database. By default, it synchronizes data seamlessly between all connected nodes. The idea behind GUN is to offer a decentralized database system with real-time updates and eventual consistency.

Its flexible data storage model allows for tables with relations (as in MSSQL or MySQL), tree-structured document orientation (as in MongoDB), or a graph with circular references (as in Neo4j).

Other dynamic database options include AvionDB, ThreadDB, and Secure Scuttlebutt.


ChainwolfDB

At Skeps, we are building our own customized decentralized database, ChainwolfDB. Inspired by OrbitDB, it uses IPFS pubsub as the underlying layer and maintains a MongoDB instance on each peer to store data. MongoDB, as the data layer, provides persistence and a dynamic schema for the data to be shared. ChainwolfDB uses PGP encryption to sync data to other nodes securely, and it supports transactions, which help us ensure that either all requested nodes receive the data or none of them gets to act on it.

How We Handle Blockchain Reconnects In web3js

If your team uses Web3JS in production, then you must be aware that there’s no inbuilt reconnect functionality in Web3JS to handle blockchain disconnects or restarts.


In this article, we’ll learn how to automatically handle blockchain disconnects in a production environment using Web3JS. The method described below works for Web3JS version 1.0.0-beta.35, but it should work for the stable 1.2.* versions as well.

Problem Description

Web3JS has no inbuilt reconnect functionality to handle blockchain disconnects or restarts. So, usually, when there is a connection drop, the NodeJS service needs to be restarted as well in order to connect to the blockchain again. Not a very practical approach.


Let’s see how we can gracefully handle blockchain disconnects in NodeJS. In the Web3JS library, the Provider object gives us events for:

1) Connect – connection established
2) Error – provider errors
3) End – provider connection ended

Upon disconnection, we can utilize the end event to reinitiate a new Web3JS connection. Let’s look at an example to understand this:

File connection.js

In this file, we’ll handle the connection between NodeJS and the blockchain. We will have a newBlockchainConnection method that returns an active Web3 connection object.

const web3 = require("web3");

let hasProviderEnded = false, web3Instance, reconnectInterval = 10000;

async function newBlockchainConnection(webSocketProvider, endCallback) {
        // create new provider
        const provider = new web3.providers.WebsocketProvider(webSocketProvider);
        hasProviderEnded = false;

        // connect event fires when the connection is established successfully
        provider.on('connect', () => console.log("connected to blockchain"));

        // error event fires whenever there is an error response from blockchain; the
        // error object's message property gives us the specific reason for the error
        provider.on('error', (err) => console.log(err.message));

        // end event fires whenever a connection end is detected, so whenever this
        // event fires we will try to reconnect to blockchain
        provider.on('end', async (err) => {
                // handle multiple 'end' event calls sent by the Web3JS library
                if (hasProviderEnded) return;

                // setting hasProviderEnded to true as sometimes the end event is
                // fired multiple times by the provider
                hasProviderEnded = true;

                // reset the current provider
                provider.reset();
                // remove all the listeners of the provider
                provider.removeAllListeners();

                setTimeout(() => {
                        // invoking the callback after some time to allow blockchain to
                        // complete startup; the callback will initialize a new connection
                        endCallback();
                }, reconnectInterval);
        });

        if (web3Instance == undefined) web3Instance = new web3(provider);
        else web3Instance.setProvider(provider);

        return web3Instance;
}

module.exports = { newBlockchainConnection };

File app.js 

const connection = require("./connection");

let web3JSConnection;

const endCallback = async function () {
        web3JSConnection = await connection.newBlockchainConnection('ws://', endCallback);
};

async function getWeb3Connection() {
        if (web3JSConnection == undefined) web3JSConnection = await connection.newBlockchainConnection('ws://', endCallback);
        return web3JSConnection;
}

module.exports = { getWeb3Connection };


On blockchain disconnection, when the provider triggers the ‘end’ event, we trigger a callback after a timeout. The callback then, in turn, creates a new blockchain connection.

Some points to note:

1) Sometimes Web3JS sends you multiple ‘end’ events for the same connection drop, so we have to check if we have already handled the disconnection event once

2) Setting the new provider in the web3 instance object using ‘setProvider’ instead of creating a new web3 instance

3) Resetting the provider and removing active listeners

4) The reconnect interval should be at least 5 seconds, as it usually takes about 5 seconds for the blockchain to restart.

Skeps is a technology-first company, and we are developing a revolutionary product that can improve consumer financing for both retailers and lenders. Stay tuned for our next write-up on the technology we’re working on.

Deploying Blockchain Applications with Docker

Docker provides great support in quickly getting a blockchain node up and running without the need to individually configure each machine separately.


Blockchain and Docker are technical innovations with tremendous potential in developing and maintaining application software.

Blockchain refers to a type of database architecture in which data is stored in a distributed fashion on a decentralized system of nodes. It is a revolutionary technology because it decreases the risk of financial data leaks, curtails fraudulent transactions, and increases transparency in a scalable way.

Docker is open-source software for developing and deploying applications within containers. These containers allow developers to run applications consistently regardless of the technical environment.

Simply put: build, ship, and run any application, anywhere.

This post explores Docker and its application to blockchain technology. We sincerely hope it provides a basic understanding of the usage and merits of these emerging technologies.

A real-world challenge and a possible solution

In an organization, development and operations teams function in separate environments. Code that works fine on the developer’s machine might not behave the same on the operations machine due to differences in configuration, versions and builds of the supporting software, and so on.

In such a scenario, the docker container provides a possible solution by packaging everything required to make the software run.

Docker enables you to build a container image which, when run, becomes a container. Users can utilize this image across different release and development phases, which helps standardize the environment.

A container image can be distributed among team members or across an organization any number of times to ensure that the environment stays constant and behaves the same, making issues easier to anticipate, identify, and resolve.

The image can also be pushed to Docker Hub and pulled back as per requirement, anytime and anywhere.

Technicalities – How does docker work?

Docker has three important components:

  1. Dockerfile
  2. Docker image
  3. Docker container

A Dockerfile contains the specifications (code, runtime, system tools, system libraries, and settings) required to build the container image.

A docker container image is a lightweight, standalone, executable package of software that includes code and necessary information/specifications needed to run an application.

A container is a standard unit of software that packages the code and all its dependencies, so the application runs quickly and reliably from one computing environment to another. Container images become containers at runtime.

Dockerizing Process

Let’s get started! Installing docker

Docker can be installed on an Ubuntu Linux machine using the following commands, which are arranged so that they can be copied into a shell script and executed directly.

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent
curl -fsSL | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
    "deb [arch=amd64] \
    $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli

Congratulations! You have successfully installed docker. You can test the installation by typing $ sudo docker -v, which outputs the version of the installed docker.

Creating a docker file and building it

  • Create a Dockerfile
    Pulling a tomcat image as an example:
FROM tomcat:8.5.45-jdk8-openjdk
CMD ["", "run"]
  • Create an image
$ docker build -t <image name> .
  • Check the list of images created
$ docker image ls
  • Convert the image to a container
$ docker run -p <port number>:<port number> --name <container name> <image name>
  • Check the list of containers running
$ docker container ps

Dealing with backup and Data Recovery

All looks fine so far! But you might be wondering what happens to the data in case of a container crash. Fortunately, docker provides a mount option where data can be saved in either volumes or bind mounts. Bind mounts depend on the directory structure of the host machine, whereas volumes are completely managed by docker.

  • Create a volume
$ docker volume create volumename

The volume name is provided while running the image as a container:

$ docker run -d --name containername --mount source=volumename,destination=path imagename
  • Check logs

You can check the logs of your container with:

$ docker logs -f containername
  • Pushing an image to Docker Hub

Login to Docker Hub

$ docker login

Execute the command to push

$ docker push repository-name/imagename

Frequently used docker container commands at a glance

$ docker start containername
$ docker stop containername
$ docker restart containername
$ docker inspect containername
$ docker exec -it containername /bin/bash
$ docker commit containername newimagename
$ docker save newimagename > newimagename.tar
$ docker load --input newimagename.tar
$ docker logs -f containername

Docker repository push commands

$ docker pull repositoryname/imagename
$ docker push repositoryname/imagename

Deploying Blockchain using docker Hub

Given the nascent nature of the blockchain ecosystem, scaling up a blockchain network in an enterprise environment can be challenging. This is where docker provides great support in quickly getting a blockchain node up and running without the need to individually configure each machine separately.

Docker Hub provides ready-made images for all major enterprise blockchain networks.

These images are a great starting point and can take away a lot of effort required for setting up the blockchain network. At the same time, building your blockchain network using docker ensures that the system can be easily scaled up or down without the usual headaches of managing a blockchain.

In a nutshell, docker is free, open-source software that facilitates faster software delivery and ensures consistency among different environments, saving time, effort, and money. What could be more wonderful than that!

Skeps is working on some ground-breaking technology to make a dent in the financial services industry. To know more about the product we are developing, please visit our homepage.
