Reconnecting To IPFS Pubsub After Connectivity Issues

There are multiple ways for nodes to communicate with each other. Read on to learn what we use to transfer information between nodes.

As a decentralized platform, we need a way for different nodes to communicate with each other, and there are multiple ways to do so. We use blockchain events to transfer sensitive information between nodes. For all other information, we can use IPFS pubsub, which is based on the publisher-subscriber pattern often used to handle events in large-scale networks. Pubsub is a relatively new feature in IPFS, and subscription reconnects are not officially supported. In this article, we will learn how to handle IPFS pubsub subscription disconnects in a production environment. The IPFS version used is 0.4.16.

Let’s see how we can gracefully handle IPFS subscribe disconnects in Node.js.
We use the Node.js http package to make the HTTP calls, as IPFS provides an HTTP endpoint for listening to a subscribed topic. The GET request gives us a response stream that can trigger multiple callbacks. The stream provides the following events.

  1. data – provides the data coming from the stream
  2. error – emitted when an error occurs on the stream
  3. end – emitted whenever the connection is terminated

We are storing each request in a subscription object by generating a random id as its key.

File subscriptions.js

In this file we will handle the IPFS subscriptions.

This file exports a subscribe method which can be used anywhere in the project to subscribe to an IPFS topic.

const http = require('http');

const subscriptions = {};

const subscribe = module.exports.subscribe = async (topic, callback, id = null) => {
    // Generate a (reasonably) unique id for this subscription if none was passed.
    if (id == null) id = new Date().getTime() * 100 + Math.floor(Math.random() * 99);
    const reqObj = http.get(`http://localhost:9096/pubsub/sub?arg=${topic}&discover=false`, (res) => {
        res.on("data", (msg) => {
            try {
                // Each chunk is a JSON envelope; its "data" field holds the base64 encoded payload.
                const envelope = JSON.parse(msg.toString('utf-8'));
                const decoded = Buffer.from(envelope.data, 'base64').toString('utf-8');
                callback(JSON.parse(decoded)); // we always publish JSON on our pubsub topics
            } catch (err) {
                console.log(err);
            }
        });
        res.on("end", async () => {
            // If the subscription was canceled intentionally, just clean up and stop.
            if (subscriptions[id] && subscriptions[id].isCanceled === true) {
                delete subscriptions[id];
                return;
            }
            // Otherwise abort the stale stream and re-subscribe after a short delay.
            await cancel(id);
            setTimeout(async () => {
                try { await subscribe(topic, callback, id); } catch (_) { }
            }, 10 * 1000);
        });
        res.on("error", async (err) => {
            console.log("unable to add subscription on " + topic);
            console.log(err.message);
            setTimeout(async () => {
                try { await subscribe(topic, callback, id); } catch (_) { }
            }, 10 * 1000);
        });
    });
    reqObj.topic = topic;
    subscriptions[id] = reqObj;
    console.log("subscription added on " + topic);
    return id;
};

const cancel = module.exports.cancel = async (id) => {
    if (id in subscriptions) {
        if ("abort" in subscriptions[id]) try {
            subscriptions[id].isCanceled = true;
            subscriptions[id].abort(); // terminate the underlying HTTP request
        } catch (_) { }
        console.log("subscription removed on " + (subscriptions[id].topic || id));
    }
};

The events we are most concerned with are ‘error’ and ‘end’.

Let us discuss the ‘error’ event first. This event is emitted whenever there is an error while making the HTTP call, and in that case we only need to retry the IPFS subscription HTTP call. We add some delay using setTimeout and call the method recursively, passing the previous id so that the existing subscription object gets overwritten.

Let us discuss the ‘end’ event now. The end event is emitted when the request is terminated and there is no more data to be sent in the response. Here we also need to subscribe again on the same topic, as we did for the ‘error’ event, but we additionally need to abort the previous stream explicitly so that if any data is still pending on the old response, it is not delivered more than once.

Let us discuss the ‘data’ event now. Each message from IPFS arrives as a JSON envelope whose data field is base64 encoded. After decoding it, we parse it as JSON, since we follow a policy of always sending JSON data over IPFS pubsub (though that is not mandatory), and then call the callback with the resulting JSON object.

The subscription object can also be saved in a database and re-initialized upon a service restart. For our case, we’re simply storing the subscriptions in memory.
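As a minimal usage sketch (the topic name and handler below are illustrative), another module can consume subscriptions.js like this:

const { subscribe, cancel } = require('./subscriptions');

(async () => {
    // Subscribe to an illustrative topic and log every JSON message received on it.
    const id = await subscribe('status-updates', (data) => {
        console.log('received message:', data);
    });

    // Cancel the subscription explicitly when the process is shutting down.
    process.on('SIGINT', async () => {
        await cancel(id);
        process.exit(0);
    });
})();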

FinTech Dictionary Of Terms And Acronyms

The complete FinTech Dictionary of Terms and Acronyms – to help you understand the FinTech industry and its jargon a little better.

People often wonder what exactly ‘FinTech’ means. FinTech is a short form for ‘financial technology’ – something that has actually been around for a very long time. Simply put, FinTech refers to the technology that is driving innovation in the financial services industry. The term has become far more widely used in the last five years, with new terminology associated with FinTech also springing up.

New buzzwords, jargon and acronyms are commonplace when talking about the industry, and so we hope to explain, in clear terms, exactly what some of them mean in this exhaustive FinTech Dictionary.

Banking and Payments

A2A (Account-to-Account): Payments that involve the transfer of funds between two accounts owned by a single party.

Accounts Payable (AP): Amounts due to vendors or suppliers for goods or services received.

Accounts Receivable (AR): Amounts owed for goods or services delivered that have not yet been paid for.

ACH (Automated Clearing House): An electronic network that coordinates automated money transfers and electronic payments. It is a way to move money between banks without using wire transfers, paper checks, card networks, or cash.

ACH Authorization: A payment authorization that gives the lender permission to electronically take money from your bank, prepaid card account, or credit union when your payment is due.

ACH Credit: A transaction pushing funds into an account.

ACH Debit: A transaction pulling funds from an account.

ACH Return: A credit or debit entry initiated by an RDFI or Receiving Depository Financial Institution or ACH Operator that returns a previously originated credit or debit entry to the ODFI or Originating Depository Financial Institution within the time frames established by NACHA or National Automated Clearing House Association rules.

ACH Reversal: An entry (credit or debit) that reverses an erroneous entry. It must be made available to the RDFI within five banking days following the settlement date of the erroneous entry.

Address Verification Service (AVS): A security system that works to verify that the billing address entered by the customer is the same as the one associated with the cardholder’s credit card account.

B2B (Business-to-Business): A model where a transaction or business is conducted between one business and another, such as a manufacturer and retailer.

B2B2B (Business-to-Business-to-Business): A model where a business indirectly sells to another business through a middleman, such as a manufacturer sells to a wholesaler who then sells to a retailer.

B2B2C (Business-to-Business-to-Consumer): This is an indirect distribution – it is a model where a business accesses the consumer market through another business, such as IT services to bank to bank’s customers.

B2C (Business-to-Consumer): Also known as direct-to-consumer – it is a model where a business sells products or services to customers without a middleman.

Bank Identification Number (BIN):  The term bank identification number (BIN) refers to the initial four to six numbers that appear on a payment card. This number identifies the institution that issues the card and is key in the process of matching transactions to the issuer of the charge card. It may also be referred to as an Issuer Identification Number (IIN).

C2B (Consumer-to-Business): A model where consumers provide a product or service to businesses. This is a rapidly growing model and often takes the form of brand sponsorships on social media.

C2C (Consumer-to-Consumer): A model where payments take place between two different consumer accounts for goods or services. This is done often through an online marketplace like eBay, Etsy, or Craigslist.

Closed Loop Payment System: A system that operates without intermediaries, where the end parties have a direct relationship with the payments system.

Credit Bureau: A company that collects, researches, and maintains credit information, and sells that data to lenders, creditors, and consumers in the form of credit reports. The most recognizable credit bureaus are Equifax, Experian, and TransUnion.

Credit Score: A three-digit number that represents how likely a person is to pay back a loan based on their payment history. A higher score is better.

Digital Wallet: A software application used with a mobile payment system to facilitate electronic payments for online transactions as well as purchases at physical stores.

Funding Source: Any financial institution, bank, or other funding entity providing liquidity to accommodate various payment flows.

Good Funds: Funds considered equivalent to cash and guaranteed to be available upon demand.

Issuer/Issuing Bank: Any financial institution (a bank or credit union), which offers a payment card (credit or debit cards) directly to consumers (or organizations) and is liable for the use of the card. This financial institution is also responsible for the billing and collecting of funds for purchases that were made using that card.

Ledger: A book in which the monetary transactions of a business are published in the form of debits and credits.

Merchant: A retailer, or any other person, firm, or corporation that agrees to accept credit cards, debit cards, or both.

Origination: The process by which a consumer applies for a new loan, and the lender or card issuer processes that application.

Originator (ACH): The entity that starts an ACH payment transaction. The Originator is the consumer, business, or government organization that initiates the payment process and is authorized to do so.

P2P (Peer-to-Peer): A decentralized platform where two individuals interact directly with each other, without intermediation by a third party. Instead, the buyer and the seller transact directly with each other via the P2P service.

Personal Identification Number (PIN): A confidential individual code used by a cardholder to authenticate card ownership for ATM transactions.

Point-of-Sale (POS): The specific time and place where a retail transaction is completed.

Same Day ACH: Delivery of available funds within the same business day.

Settlement: The movement of funds from one financial institution to another, which ultimately completes a transaction.

Underwriter: A party that evaluates and assumes another party’s risk for a fee.

Sales

CRM (Customer Relationship Management): One of the many different approaches that allows a company to manage and analyze its own interactions with its customers.

Horizontal Market: A non-specialized market that covers a wide range of industries.

LOI (Letter of Intent): A document declaring the preliminary commitment of one party to do business with another. The letter outlines the chief terms of a prospective deal.

Software and technology

API (Application Programming Interface): A computing interface which defines interactions between multiple software intermediaries.

BaaS (Banking as a Service): The supplying of complete banking processes that allows brands to easily embed financial services into their products without having to worry about building banking infrastructure or obtaining a license.

Encryption: The technique of scrambling sensitive data automatically in a terminal or computer before transmission for security purposes using an algorithm and key.

POP (Point-of-Purchase): Occurs when a consumer check is received at a point of sale and converted into an ACH transaction immediately.

POS (Point-of-Sale): Occurs when a consumer initiates a payment, typically via card, at a point of sale.

SaaS (Software as a Service): A software licensing and delivery model in which software is licensed on a subscription basis and centrally hosted.

FinTech has been a buzzword in the world of finance, and the market is maturing. In case you want to know how the FinTech industry is evolving, have a look at this infographic.

(Disclaimer: This (FinTech Dictionary) will be updated periodically. In case you want us to add new terms and definitions, please reach out to me at swati@skeps.com.)

Making Finance Smarter With Smart Contracts

Smart Contracts are up-and-coming technology that can revolutionize our financial landscape, and leaders are cautioned to understand it before jumping.

Before understanding Smart Contracts, you need to know what blockchain is. A blockchain is a form of a distributed immutable ledger. Distributed means it can be hosted on several servers to make it highly available. Immutable means that the data written in the ledger cannot be edited; to make any change, a new entry is required on the blockchain, which gives us versioning of the data by itself. Everyone who is part of the blockchain network has access to the complete history of changes to the data.

Now, what is a smart contract?

A smart contract is a program that is stored on the blockchain which executes actions when a specific event occurs. This event can have an impact on backend calculations. Smart Contracts contain terms and conditions agreed upon by the participants to render services, transfer assets, and record transactions. They are designed to be run by the participants without the need for intermediaries.

What are the current issues being faced by the finance industry?

Due diligence and bookkeeping take up most of the resources in finance. Since money is at stake, every penny should add up, and every piece of information should be correct. Reconciliation becomes challenging when these details are dispersed across multiple systems. Inter-entity communication requires even more verification to prevent fraud: any information shared by external entities is subject to verification, which adds extra costs for a company. Taking swift action on new information also involves human effort. For example, the transfer of deeds of a property after successful loan disbursement still requires multiple intermediaries.

In the underwriting industry, underwriters are a line of defense between business and loss. Each underwriter has to go through multiple documents and pieces of information. It is a time-consuming process, and when coupled with increasing workloads, standards degrade.

In the loan industry, all the loans in mortgage pools are verified against actual documentation. These audits are very costly, ranging up to $100 each, making them cost-prohibitive for smaller loan pools. Although these pools are sampled, less than 1% of the loans are verified, introducing uncertainty into diligence results. What blockchain means for finance is that nobody can tamper with the data stored on it: if anyone tries, everyone in the network will know about it, bringing transparency and reducing the need for multiple verifications at every step.

How do Smart Contracts fit into the financial landscape?

Since Smart Contracts are short programs that execute when a condition changes on the blockchain, they can serve multiple purposes. For example, suppose an insurance company stores its data on a blockchain and revises a customer's insurance premium every five years; in such a case, a Smart Contract can use the date of birth stored on the network to trigger the revision automatically.

What are the uses of Smart Contracts?

Since data is already stored in an immutable ledger, Smart Contracts can be run on it to ensure 100% diligence in the underwriting process. Once data is verified, the results can themselves be stored on the blockchain to prevent future revalidations. Having loan information on the blockchain also means Smart Contracts can be used to reach out to downgraded loan borrowers in time, before the loan becomes a non-performing asset.

Smart Contracts can gather documents from the blockchain and submit them for claims immediately upon a preconfigured triggering event, leading to faster and more consistent payouts. Submitting loan documents on the blockchain enhances security and transparency and lowers fraud risk. Though these may seem like mere words to us, they are costly operations in finance.

An independent third party can audit the code of a Smart Contract to verify the integrity of the agreement. Data integrity is also maintained thanks to the immutable nature of the blockchain. Smart Contracts can also be used to compile monthly surveillance performance reports.

The nots of Smart Contracts

Smart Contracts’ legal validity is enforced in 47 states of the United States. However, the United Kingdom legislature is still researching the Smart Contracts landscape. Similarly, many countries are still working on legal frameworks around blockchain technology. Like any other technology, Smart Contracts are written by humans and are prone to bugs, which can jeopardize large amounts of money when stakes are high. Even though blockchain is distributed in nature, it is still susceptible to scaling challenges: as the amount of data and the number of nodes in the blockchain increase, the execution speed of Smart Contracts decreases.

Conclusion

Smart Contracts are up-and-coming technology that can revolutionize our financial landscape. Amongst all the blockchain hype, leaders are cautioned to understand its pros and cons before jumping.

If all you have is a hammer, everything looks like a nail!

The Top FinTech Partnerships In 2020

FinTech partnerships have the opportunity to explore the transition to digital transactions and gain market share.

The partnerships in the FinTech industry in 2020 have availed themselves the opportunity of exploring the rapid transition to digital transactions and new markets, to consolidate their position in the value chain and further gain market share.

The FinTech ecosystem has seen some exciting disruptive forces create havoc over the last five years. The COVID season, in particular, has pushed organizations out of their comfort zones, moving them from a “winner takes all” attitude towards collaboration that delivers tremendous benefits for both sides. The trend goes back to early 2019, and since then it has been changing the dynamics of the financial services industry.

The size of the market

The uncertain times through 2020, as we all know, can be seen as both a threat and an opportunity, with investors focusing on late-stage maturity trends. A healthy flow of 1,221 FinTech deals between January and June amounted to $25.6bn, inclusive of venture capital investment and mergers & acquisitions (M&As), towards digital-native FinTech solutions transforming banks by and large. Simple shifts in customer patterns, such as the transition to online transactions, high-speed and high-fidelity cloud-based solutions, and high-powered mobile devices, have made the customer journey a lot leaner than it used to be with traditional banks. The general trend of M&As has been across the payments, lending, and banking space, primarily in the interest of young organizations exploring an added footprint within banking.

Digital transformation calls for enhanced capabilities

SMB lending disruptor Kabbage recently forayed into banking with the debut of a checking account interconnected with a range of digital banking services, including eWallets and bill payment services. American Express acquired FinTech Kabbage at a rumored figure of $850 million to empower small businesses with technology solutions that help them manage their cash flow and focus on growing their businesses.

In a race to substitute a card-based business model running out of steam, the most talked-about acquisition of the COVID season was Visa's potential acquisition of Plaid for a whopping $5.3 billion. Although this is currently on hold due to antitrust regulatory concerns, it is sure to resurface. In a similar context, Mastercard has successfully closed the acquisition of Finicity at a much lower figure of $825 million.

The quest for new market opportunities

Competitor acquisition has also been on the rise during 2020, for obvious reasons. Although the deals are not cheap, the strategic economic impact is high. Take the example of Worldline acquiring French competitor Ingenico for $8.6 billion to increase market reach in the payments space. To consolidate its position in other parts of Europe and enable more local payments (and strengthen the partnership with Czech Republic-based Komercni Banka), Worldline has also acquired a majority stake in GoPay, a well-known payment gateway in Eastern Europe.

On the other extreme, with technology companies offering Banking as a Service, Stripe stands out with its recent partnerships with Shopify and Evolve Bank & Trust to provide business accounts designed for merchants. Stripe also continues to invest heavily in its treasury infrastructure to strengthen its new partnership with Goldman Sachs. Similarly, a hardcore technology-backed partnership was witnessed in the Southeast Asian market, where cloud service company Infor partnered with DBS Bank in Singapore. The partnership is a peculiar one in that a FinTech company will augment its digital-first solution with the trade-financing capabilities offered by a bank.

The bottom line

Statistics show $14 billion worth of FinTech deals in Q1 of 2020, followed by $11.7 billion worth of deals, down 43% and 21% year-on-year respectively. Overall, the negative impact of the coronavirus outbreak comes coupled with accelerated digital trends. Increased demand for digital transformation at banks and financial services organizations has led to a surge in cashless payments, while FinTech services have forced traditional organizations to double down on FinTech investments.

Logging In Microservices – Best Practices

Learn some best practices to follow while logging in microservices.

In this article, we will examine some best practices to follow while logging in microservices and the architecture to handle distributed logging in the microservices world.

Microservices architecture has become one of the most popular choices for large-scale applications in the world of software design, development, and architecture, largely due to its benefits over the traditional counterpart, the monolithic architecture. These benefits arise from a shift from one single large, tightly coupled unit (the monolith) to multiple small, loosely coupled services, wherein each service has limited and specific functionality to deliver. With smaller codebases, we get to leverage the power of distributed teams thanks to decreased dependencies and coupling, in turn reducing the time to market (or production) of any application. Other advantages include a language-agnostic stack and selective scaling.

While the microservices architecture ships with the advantages above, logging in microservices comes with its own set of complexities: a single request in this architecture can span multiple services and might even travel back and forth between them. To trace the end-to-end flow of a request through our systems and identify the source of errors, we need a logging and monitoring system in place. We can adopt one of two solutions: a centralized logging service, or an individual logging service for each service.

Individual logging service vs. centralized logging service

Individual logging solutions for each service can become a pain point as the number of services grows, because for every process flow you want to inspect, you might need to go through the logs of each service involved in serving that request, making issue identification and resolution a tough job. A centralized logging service, on the other hand, gives you a single go-to place for the same and, backed by enough information around the logs and a well-thought-out design, can do wonders.

At Skeps, we use centralized logging solutions for our applications running on microservice architecture.

Centralized logging service

A single centralized logging service that aggregates logs from all the services is the preferred solution in a microservices architecture. In the software world, unique or previously unseen problems are not rare, and we certainly do not want to be juggling multiple log files or purpose-built dashboards to figure out what caused them. While designing a standard centralized logging scheme, one could, or in fact should, follow these norms:

Using a Correlation Id for each request

A correlation id is a unique id assigned to an incoming request, which helps us identify that request uniquely in each service.
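A minimal sketch of how such an id can be attached in a Node.js service (the Express-style middleware signature and the header name here are illustrative assumptions; any framework or interceptor works the same way):

const { randomUUID } = require('crypto'); // built into Node.js 14.17+

// Attach a correlation id to every incoming request and echo it back in the
// response, so each service (and every log line) can reuse the same id.
module.exports.correlationId = (req, res, next) => {
    const id = req.headers['x-correlation-id'] || randomUUID();
    req.correlationId = id;
    res.setHeader('x-correlation-id', id);
    next();
};

Every outgoing call to a downstream service would then forward the same x-correlation-id header, so the id travels with the request across service boundaries.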

Defining a standard log structure

Defining the log structure is the most crucial part of logging effectively. First, we need to identify why we are enabling logging. A few points could be:

  1. How did each service respond while delivering on its front – did it succeed or produce errors? Whatever the case may be, our aim should be to capture as much context around it as possible.
  2. What process/function from the service generated the log?
  3. At what time during the process was the log generated?
  4. How crucial is the process that generated the log?

While answering these questions, we get to derive a format which can include, but is not limited to, the following fields (a sample entry is sketched after the list):

  1. Service name
  2. Correlation Id
  3. Log String (can include a short description, name of the generating method)
  4. Log Metadata (can include error stacks, successful execution response(s) of a subtask)
  5. Log Timestamp
  6. Log Level (DEBUG, WARN, INFO, ERROR)
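For illustration only, a single log entry following such a format could look like the sketch below (the service name, values, and field names are made up):

{
    "serviceName": "payment-service",
    "correlationId": "6f1c2b6e-6a1d-4c3e-9a2f-1b2c3d4e5f60",
    "logString": "createOrder: order persisted successfully",
    "logMetadata": { "orderId": "ORD-1042", "durationMs": 132 },
    "logTimestamp": "2020-12-01T10:15:30.000Z",
    "logLevel": "INFO"
}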

Customized Alerts

When something critical breaks, we do not want it to just get stored in our databases without us knowing about it in real time. So it is pivotal to set up notifications on events that indicate a possible problem in the system; such events can be categorized by keeping a reserved log level.
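As a rough sketch of the idea (the notify function and the reserved levels below are assumptions, standing in for whatever alerting integration is in place, such as email or Slack), the centralized service can check the level of each entry at ingestion time:

// Hypothetical hook inside the centralized logging service.
const ALERT_LEVELS = ['ERROR', 'FATAL'];

async function onLogIngested(logEntry, notify) {
    if (ALERT_LEVELS.includes(logEntry.logLevel)) {
        // notify() is a placeholder for the actual alerting integration.
        await notify(`[${logEntry.serviceName}] ${logEntry.logString}`, logEntry);
    }
}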

A set of rich querying APIs or Dashboard

After storing all this information, it is important to be able to make sense of it. Rich APIs can be developed to filter on correlation id, log level, timestamp, or any other parameter that helps identify and resolve issues at a much faster pace.
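Purely as an illustration (the route, query parameters, and the app and logStore variables are assumptions, not a prescribed design), a filtering endpoint over the aggregated logs might look like:

// Illustrative Express route over an already-populated collection of log entries.
app.get('/logs', (req, res) => {
    const { correlationId, logLevel, from, to } = req.query;
    const results = logStore.filter((entry) =>
        (!correlationId || entry.correlationId === correlationId) &&
        (!logLevel || entry.logLevel === logLevel) &&
        (!from || entry.logTimestamp >= from) &&
        (!to || entry.logTimestamp <= to));
    res.json(results);
});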

Decide a Timeline to Clean Logs

Decide on an appropriate timeline to clear the clutter and cut down on your storage usage. This depends on your application's needs and the historical reliability of your shipped code. This applies when you are not storing any sensitive information; otherwise, you need to follow the compliance rules in place. However, there are workarounds: you can keep an additional field to store such information, and only that field needs to be cleared from each stored log as per the compliance timeline.

How to make the log aggregator fail-safe?

There can be instances when the logging service (log aggregator) goes down, making it a single point of failure for our analysis and debugging needs. We do not want to miss out on any logs generated during that outage, so we need an alternate mechanism that stores those logs until our aggregator is back online.
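One possible sketch of such a mechanism (shipToAggregator, the buffer file path, and the replay approach are assumptions): append entries to a local file whenever the aggregator call fails, and periodically re-ship whatever has been buffered.

const fs = require('fs');

const BUFFER_FILE = '/var/log/app/unshipped-logs.jsonl'; // hypothetical local buffer

// Try to ship a log entry; if the aggregator is unreachable, buffer it locally.
async function ship(logEntry, shipToAggregator) {
    try {
        await shipToAggregator(logEntry);
    } catch (err) {
        fs.appendFileSync(BUFFER_FILE, JSON.stringify(logEntry) + '\n');
    }
}

// Periodically replay the buffered entries once the aggregator may be back online.
async function reshipBuffered(shipToAggregator) {
    if (!fs.existsSync(BUFFER_FILE)) return;
    const lines = fs.readFileSync(BUFFER_FILE, 'utf-8').split('\n').filter(Boolean);
    fs.unlinkSync(BUFFER_FILE);
    for (const line of lines) await ship(JSON.parse(line), shipToAggregator);
}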

Additional Pointers

  • It is advisable to generate logs asynchronously to reduce the latencies of our process flows.
  • A pattern known as observability eases the process of monitoring distributed environments. Observability includes application log aggregation, tracing, deployment logs, and metrics, giving a holistic view of our system.
  • Tracing identifies the latencies of the involved processes/components/services and helps to identify bottlenecks. This allows us to make better decisions regarding the application's scaling needs.
  • Deployment logs – since in microservices we have multiple deployable services, we should also compile the logs of the deployments that are made.
  • Metrics refer to the performance and health of the infrastructure on which our applications/services rely, for example, current CPU and memory usage.

2020 Recap: Highlights And Milestones

2020 Recap: 2020 was particularly notable for Skeps because of a series of events that happened. Below infographic says it all!

2020 has been a challenging year for everyone, but it was particularly notable for Skeps because of a series of events that happened in 2020. The below infographic says it all!

What’s the one word that comes to your mind when you sum up 2020? Unprecedented. Unpredictable. Maybe overwhelming? Perhaps all of these. With coronavirus reaching every corner of the world and millions losing their lives in the absence of a widespread treatment or vaccine, we have never seen a more uncertain future. In such a scenario, Skeps is fortunate to have a product that is even more meaningful in this unsettling time. So, let us rewind and look at the year gone by.

From new team members, both in India and the U.S., to new clients, 2020 has been a year of immense growth, learning, and opportunities for the Skeps team, and we are excited to share with you, our extended Skeps family, our fondest memories and key milestones of the year.

2020 Year In Review

We are proud of our 2020, and we could not have done it without our TEAM, all our CUSTOMERS, family members, friends and well-wishers. Thank you for joining us in our work to transform the financial ecosystem! 

Our mission continues in 2021, with more ambitious goals for our customers and our team. Want to join our mission? We’re hiring! Visit Skeps Careers to learn more.

How Easy Is It For A Merchant To Integrate With Skeps?

Integration with Skeps is very simple. Read on to know how easy it is for a merchant to integrate with Skeps.

All you need to do is provide us with two docker-enabled servers, and our team will do the rest. Skeps also provides two business APIs that are integrated with your portal, with which you can perform the actions below (an illustrative call is sketched after the list):

  1. Check Approval – To get loan offers for the customer.
  2. Share Lead – To share the offer selected by the customer and get the buyer’s URL where the customer can fill in the required details and get the loan.
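Purely as an illustration (the URL, endpoint name, request fields, and response shape below are hypothetical, not Skeps’ actual API contract), a Check Approval call from a Node.js backend might look something like this:

const axios = require('axios'); // any HTTP client works

// Hypothetical example of requesting loan offers for a customer.
async function checkApproval(customer) {
    const response = await axios.post('https://<your-skeps-host>/api/check-approval', {
        amount: customer.cartTotal,
        customer: {
            firstName: customer.firstName,
            lastName: customer.lastName,
            zip: customer.zip,
        },
    });
    return response.data.offers; // list of loan offers to show the customer
}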

Skeps provides you with a dashboard where you can monitor the below things:

  1. Evaluate your leads – You can test the offers you will get from the buyers for your customers by uploading the data of your customers in a CSV file.
  2. Manage Buyers – You can choose which buyers you want to share your leads with, and check the history of evaluated leads with each buyer and their status.
  3. Check Metrics – You can do an in-depth analysis of the leads in a given timeframe, such as the approval rate, average FICO, shared rate, average shared loan, etc.
  4. Check History – You can get all the information related to a lead like current status, offers received, loan amount, the buyer with which the lead was shared, etc.

Also, the client is free to request any additional APIs needed for reconciliation purposes; Skeps can develop and integrate them for a specific client.

Securing P2P File Transfers

Learn how to secure P2P file transfers within clients using IPFS (InterPlanetary File System) and blockchain private messaging.

This article takes you through the process of securing P2P communication within clients using a medium such as IPFS (InterPlanetary File System) and blockchain private messaging. Let’s try to develop an understanding of how this works.

What are P2P file transfers?

Peer-to-Peer (or P2P) networks have a distributed architecture at their core: a group of peers works with each other instead of relying on a centralized server. So, considering the specific scenario of data or file transfer in a P2P setup, each peer (or participant) can act both as a source of data and as a consumer of data.

We primarily use IPFS as the underlying transport for connectivity and transfer of data between peers.

Now, as discussed, multiple peers have access to a file in a P2P setup. The question that arises is: can we make a targeted file transfer to a particular peer securely, without missing out on the advantages offered by this architecture? The answer is yes, by using encryption techniques on our files.

The encryption technique that we use is PGP.

What is PGP encryption?

Pretty Good Privacy (PGP) is an encryption program that provides us a way to encrypt any sensitive information.  

PGP is a good encryption technique to go with, primarily for two reasons. First, it uses a mix of both symmetric and asymmetric encryption: one only needs to share their public key (from the public/private key pair generated by each peer) and keep their private key secure, without incurring the enormous cost in terms of speed that generally comes with public-key encryption. Second, its design allows us to encrypt the same information, if required, for multiple clients or peers at a time. This lets us broadcast a single encrypted message across the network while only the targeted peers can decrypt it and make sense of the data.

PGP encryption in action

For demonstration purposes, we will be proceeding with IPFS upload and download cli calls. (Find more information here – <https://docs.ipfs.io/>)  

We have divided the process into the following steps:

  1. Generating a public/private key pair

It can be generated using some information that uniquely identifies a client. It is a one-time and simple process:

The generated publicKey is the one that can be shared openly with others, while the privateKey needs to be kept safe and never shared with anyone; it is the one used to decrypt the messages.

    const openpgp = require('openpgp'); // Importing openpgp module.
    (async () => {
        const { privateKeyArmored, publicKeyArmored } = await openpgp.generateKey({
            userIds: [{ name: 'Cristopher', email: 'real.cristopher@hmail.com' }],
            curve: 'ed25519', // ECC curve name
            passphrase: 'verySecurePassword'
        });
        console.log(privateKeyArmored); // PGP privateKey
        console.log(publicKeyArmored); // PGP publicKey
    })();
  2. Generating an encrypted message for a client X and writing it to a file
    const openpgp = require('openpgp'); // Importing openpgp module.
    const fs = require('fs');
    (async () => {
        const publicKeysArmored = [
            `-----BEGIN PGP PUBLIC KEY BLOCK-----
            ...
            -----END PGP PUBLIC KEY BLOCK-----`
        ]; // An array consisting of the target(s) PGP publicKey(s)
        const data = "This data needs to be securely transported."; // Data to be encrypted.
        let publicKeys = [];
        for (let index = 0; index < publicKeysArmored.length; index += 1) {
            [publicKeys[index]] = (await openpgp.key.readArmored(publicKeysArmored[index])).keys;
        }
        const { data: encrypted } = await openpgp.encrypt({ // getting the encrypted data.
            message: openpgp.message.fromText(data),
            publicKeys,
        });
        const encryptedData = Buffer.from(encrypted).toString('base64'); // converting encrypted data to base64
        fs.writeFile('encryptedFile.txt', encryptedData, 'base64', (err) => { // writing encrypted data to a file
            if (err) console.log(err);
            else console.log('Encrypted file is ready.');
        });
    })();
  3. Uploading this file to IPFS
    // Considering an IPFS network is up and running at IPFS_URL.
    ipfs add encryptedFile.txt
    // This would return a hash, say QmWNj1pTSjbauDHpdyg5HQ26vYcNWnubg1JehmwAE9NnS9, for example.

This hash can be sent over to the target nodes using blockchain private messaging.

  4. Downloading the file from IPFS at client X’s end
    /* The hash obtained in step 3 has to be used to download the file from IPFS. */

    ipfs cat [[IPFS_HASH]] > encryptedMessage.txt
  5. Decrypting the message at client X’s end
    const openpgp = require('openpgp');
    const fs = require('fs');
    (async () => {
        const privateKeyArmored = `-----BEGIN PGP PRIVATE KEY BLOCK-----
        ...
        -----END PGP PRIVATE KEY BLOCK-----`; // PGP privateKey of the client.
        const passPhrase = `verySecurePassword`; // passPhrase with which the PGP privateKey of the client is encrypted.
        fs.readFile('encryptedMessage.txt', 'base64', async (err, data) => { // async callback so await can be used inside
            if (err) console.log(err);
            else {
                const { keys: [privateKey] } = await openpgp.key.readArmored(privateKeyArmored);
                await privateKey.decrypt(passPhrase);
                const encryptedDataInAscii = Buffer.from(data, 'base64').toString('ascii');
                const { data: decrypted } = await openpgp.decrypt({
                    message: await openpgp.message.readArmored(encryptedDataInAscii),
                    privateKeys: [privateKey], // the unlocked private key decrypts the message
                });
                console.log(decrypted); // The decrypted message.
            }
        });
    })();

Encryption at Skeps

We care about each bit of critical data flowing across our and our partners’ infrastructure. So, to remain a secure haven, we have also identified and unleashed the benefits of blockchain private messaging, which allows us to share information with specific targets without broadcasting it throughout our network, ensuring encrypted data reaches only the rightful stakeholders.

So, at Skeps, for sensitive information we transmit encrypted content over the blockchain as private messages, and for non-sensitive information we use the approach discussed earlier: encrypted messages via IPFS.

We also ensure that the passphrases and private keys of each participant in our network are created and maintained with the latest and most secure algorithms and technologies.

Holiday Season Reality: What has changed this year?

BNPL FinTechs are well-positioned to enable a growing portion of consumer spending, during the holiday season.

BNPL FinTechs are very well-positioned to enable a growing portion of consumer spending, especially during the holiday season (Black Friday and Cyber Monday).

The biggest shopping carnival of the year—Black Friday to Cyber Monday—is around the corner. Although the COVID-19 pandemic has transformed many aspects of life, consumers and retailers are all geared up for the upcoming holiday season. Consumers have moved to online channels, while retailers are working on enhancing their digital capabilities. To put it precisely, we are all in store for a very different holiday shopping season. Before we dive deep into the details of what has changed and what will change this year, let us talk about some numbers that highlight a massive shift.

A Deloitte report shows that despite the massive unemployment rate in the U.S. and the worst economic slump since The Great Depression, holiday retail sales are still expected to increase, between a modest 1 percent and 1.5 percent. Early sales projections between November 2020 and January 2021 are expected to be around $1.15 trillion. This year, we know customers want to avoid crowds, and therefore digital will play an even more crucial role in 2020’s holiday shopping landscape. A record number of shoppers, around 60 percent [1], plan to shop online. Deloitte forecasts reveal that e-commerce sales are set to grow by 25 percent to 35 percent, year over year, during the 2020–2021 holiday season, a leap from the 14.7 percent uptick in online holiday shopping from 2018 to 2019. This season, we can expect e-commerce holiday sales to generate between $182 billion and $196 billion.

All these estimates, coupled with more reasons, have prompted retailers to kick off sales as early as October. Let us see what these reasons might be.

Retailers kicked off sales as early as October

If you look at the data for Cyber Monday from 2019, sales hit a record $9.4 billion [2] vs. $7.9 billion in 2018, and, in the same period, Black Friday witnessed sales of $7.4 billion vs. $6.2 billion. That might have prompted retailers to kick off sales as early as October [3] in an effort to drive incremental sales volume leading into the holiday season. While retailers have made numerous preparations for the traditional holiday shopping season, the disruption caused by the pandemic has put things into overdrive. As consumers begin to make their holiday wish lists, retailers have already planned changes to ensure they can deliver for the busy holiday shopping season.

Imports reached an all-time high as retailers stocked up on inventory well ahead of schedule. We have also noticed that some have termed this holiday season "shipageddon". It seems retailers have planned for an extended shopping season. Around 69 percent of retail respondents surveyed by the National Retail Federation expected consumers to start their shopping in October this holiday season, and they were ready to meet this demand with seasonal inventory and promotions.

Retailers planning for early start to the holiday season

Also, for many retailers, this holiday season is vital as they are relying on Q4 to recoup losses and meet 2020 sales targets. Their holiday season performance will also serve as a litmus test for digital experiences moving forward. As 47% of global consumers [4] are more interested in shopping online for the holidays this year compared to last year, getting ahead of the curve becomes even more crucial.

Aside from all this, in a global economic downturn, it will be interesting to observe how retailers live up to consumers’ expectations, especially as they shift from offline to online shopping.  

Retailers need to enhance the customer experience

As e-commerce demand and competition continue to rise, retailers need to be on top of customer experience to lure consumers to their stores for their holiday shopping. From seamless payment methods to the availability and delivery of stock, everything needs to be on point. In a scenario where holiday spending is expected to fall, given the dampened mood caused by the pandemic and resulting recession and job losses, retailers need to look into the source of funds that will be used to pay for holiday spending and open multiple payment options for the consumers.  

In terms of overall payments, the top positions were occupied by debit cards (35.7 percent) and credit cards (31.8 percent).

Funds that will be used to pay for holiday spending

“Consumer demand right now is for responsible credit options,” said PayPal’s Executive VP of Global Sales, Peggy Alford. “For them, buy now pay later (BNPL) [is] more than just credit – it’s the flexibility of payment. In these economically uncertain times, the desire for many people — especially our young folks — is to not go into debt, or risk it, but they do need spending flexibility.”  

For example, PayPal data says 1 in 3 retailers are implementing cashless options in their stores, and retailers need to oblige and offer various payment options for a seamless experience. Used in more than 70% of all consumer transactions [5] globally, local cards, e-wallets, bank transfers, and cash-based digital payments are the dominant payment methods. But in a post-COVID scenario where consumers are experiencing a money crunch, retailers that enable payment flexibility will set themselves up for lasting success, ensuring they are on the right side of this industry transformation.

This holiday season seems to be serving as an e-commerce strategy template for years to follow. Retail is at a crossroads, and the impacts in the next few months will help chart the path for the future. The next few months will accelerate a digital arms race for retailers looking to develop the best possible e-commerce experiences for consumers.  

FinTechs well-positioned to drive holiday sales in 2020

While the pandemic is upending the status quo for many consumers, retailers, and lenders alike, one segment of FinTechs continues to witness growth amidst market uncertainty. As consumers and retailers are rapidly adopting BNPL financing options, it could be critical to sales this holiday season, for it seems to help offset some of the economic impacts of the pandemic.  

BNPL FinTechs are very well-positioned to enable a growing portion of consumer spending, especially during the holiday season (Black Friday and Cyber Monday). Consumers who are new to credit are adopting BNPL services. After raising substantial funding in 2020, BNPL FinTechs can give consumers the gift of greater financial health post-pandemic.  

This year, BNPL FinTechs fared well, and top players raised substantial capital to respond to growing consumer demand. Meanwhile, other FinTech lenders pulled back on originations and reduced their workforces to preserve operating capital. Also, reports show improved customer conversion rates and higher average order values for retailers offering installment payment options; some merchants are witnessing a 20% lift in conversion rates and a 60% lift in average order values [6]. Given the pessimistic outlook for 2020 retail sales, these statistics seem very encouraging.

Having recently released a new BNPL product, PayPal says that more flexible payment plans are going to be critical to securing sales during a period of economic uncertainty. It also says that 45 percent of merchants who already offer BNPL financing options could see holiday sales grow by at least 5 percent, and that 42 percent of retailers say the additional payment option has combated shopping cart abandonment.

1. NRF: Holiday 2020 shopping starts now
2. Statista: Thanksgiving weekend e-commerce sales
3. CNBC: Black Friday is over: Here’s why retailers are touting weeks of deals
4. PPRO: Payment Service Providers
5. Salesforce: 2020 retail holiday guide
6. Fiserv: Retailers meet customer demand buy now pay later installment

Achieve A Stable Platform With Testing Methodologies

In this article, we have discussed why testing is necessary, how testing methodologies work, and how Skeps performs these tests.

The rationale for writing tests for a piece of code is that humans are bound to make mistakes in anything they do. Rather than denying this fact, we must use it to our advantage. We can describe how a piece of software should behave and how it should not; both of these paradigms of thinking are essential for a full testing system.

Positive cases: defining what the system should do for it to be marked as working correctly.

Negative cases: defining what the system should not do for it to be marked as working correctly.

Ways of testing

There are different ways of testing –

  • Manual testing – Manually testing the APIs/modules, which in turn use other smaller modules. In some cases automation testing is not possible because of certain constraints, and manual testing is the only way to go.
  • Automated testing – Automation test cases are written once the test cases have been defined through manual testing. They act as future checks that run every time a change is made in the system. Writing automation tests lets us skip the time-consuming manual tests otherwise required on every change.

Testing methodologies from different levels of abstraction

Testing every line of code – unit test & test-driven development

There is a methodology called test-driven development (TDD) in which unit tests are written even before the actual piece of code they test. The rationale is that we must start with a failing test case and then write the code that makes it pass; that way, we ensure the program does what it is intended to do. To say that a system is rigorously tested, we need to know how much of our code is covered by test cases; this is called the test coverage of the code. Most languages have frameworks that produce a code coverage report based on the unit tests written in the code.
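As a tiny illustration of the red-green cycle (written in Node.js with the built-in assert module for consistency with the earlier examples; Skeps' own test stack, shown later in this article, is Java based), the test is written first and fails until the function is implemented:

const assert = require('assert');

// Step 1: write the test first. It fails until computeEmi exists and is correct.
function testComputeEmi() {
    const emi = computeEmi(1200, 0, 12); // principal, annual rate, number of months
    assert.strictEqual(emi, 100);
    console.log('testComputeEmi passed');
}

// Step 2: write just enough code to make the failing test pass.
function computeEmi(principal, annualRate, months) {
    if (annualRate === 0) return principal / months;
    const r = annualRate / 12;
    return (principal * r * Math.pow(1 + r, months)) / (Math.pow(1 + r, months) - 1);
}

testComputeEmi();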

Who makes, thou shalt test – Dev testing

This type of testing sits, literally and metaphorically, between unit and integration testing: the creator of the code runs the test cases before launching a full test cycle. It ensures that only quality code is passed on to further phases of testing and prevents silly mistakes from eating into testing time.

The sum of parts is not equal to the whole – Integration testing

In simple terms, if two systems are individually stable, it does not mean that a system built from the interactions between them can also be labeled stable without testing it. This is where the integration testing methodology comes into the picture: rather than running a test on each module independently, as in unit tests, the combined system is tested as a whole, and its positive and negative behavior is defined and tested separately.

Automating everything – API automation testing

In web-based scenarios, different microservices interact with one another via HTTP APIs. Ensuring the health and consistent behavior of these APIs is critical, which is why, to cover testing from a microservice point of view, API automation testing is done. In this type of testing, each API exposed by a microservice is given a predefined set of inputs and is tested against the behavior it should exhibit. This behavior can be the state of the datastores after API success and the response returned by the API. Since this type of testing can mutate the data in datastores, it is done in a sandboxed environment, preferably in docker containers with their own separate datastores. These containers are created before the test cases run and torn down after test execution, ensuring a repeatable testing environment independent of the datastore state left by previous testing runs.

What shows matters – Web/app automation testing

In a web-driven world, users interact with a piece of software via some client, be it a browser or a mobile app. To ensure a consistent system from the user’s point of view, each of these clients must be tested on each build. This can be achieved via a mix of manual and automated testing. First, manual testing is done to record the ideal state that should be shown to the customer. Once that is done, web/app automation frameworks like Selenium, Appium, or Puppeteer are used to automate these UI interactions. The results of this type of UI automation testing are captured either by running the complete flow and recording the end state, or by taking screenshots at each stage and analyzing them programmatically or manually. This method is still faster than complete manual testing, in which a dedicated person tests every user journey by clicking and tapping on a web/mobile device. Also, with the advent of multiple browsers and mobile phones with different screen sizes and operating systems, automation testing scales comparatively more easily than manual testing, although it still requires some manual intervention, just not as much.

Humans to rule them all – manual testing

No matter how many automation frameworks we use and how many automated test cases we write, those cases remain static. In dynamic scenarios, however, things change in ways that are incomprehensible to machines. Automated test cases are suitable for repetitive checks, but humans should still think about new ways a system can break; once such a case is marked as a valid test case, it can be automated for the next releases.

Testing at Skeps

For API testing, we use the Rest Assured java library with the TestNG testing framework.

What is TestNG

TestNG is a testing framework inspired by JUnit and NUnit. It covers all types of tests: unit, functional, end-to-end, integration, etc.

A few of the advantages of using TestNG are as follows:

  1. It supports annotations.
  2. You can run your tests in arbitrarily big thread pools with various policies available (all methods in their own thread, one thread per test class, etc.).
  3. It supports ways to ensure that code is thread-safe.
  4. Flexible test configuration.
  5. Support for data-driven testing with @DataProvider.
  6. Support for parameters.

Another advantage of using TestNG is that it allows grouping test cases. At Skeps, we have more than 1,000 active test cases that are run before every build. During development, running this large number of test cases is time-consuming and redundant. Here, test case grouping allows developers to run test cases only for the modules touched by their code, which enables a faster development cycle where the complete test suite is run once development is done.

A sample TestNG configuration testng.xml, which intends to run only the ABC group test cases, would look like:

<?xml version="1.0" encoding="UTF-8"?> 
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd"> 
<suite name="ABCFlowTestsSuite"> 
    <test thread-count="5" name="ABCFlowTests"> 
    <parameter name="IP" value="1.2.3.4" /> 
    <parameter name="DealerNumber" value="13" /> 
    <parameter name="BackendServerIP" value="55.55.33.44" /> 
    <parameter name="param1" value="124" /> 
    <groups> 
    <run> 
    <include name="ABCgroup" /> 
    </run> 
    </groups> 
    <classes> 
    <class name="com.skeps.tests.stableflow.ABC" /> 
    </classes> 
    </test> 
</suite>

What is Rest Assured

Rest Assured is a library we use at Skeps to facilitate API automation testing. It provides boilerplate code for interacting with HTTP APIs.

Including the Rest Assured library is as simple as adding the following dependencies in your dependency manager config (Maven, in this case):

<dependency> 
    <groupId>io.rest-assured</groupId> 
    <artifactId>rest-assured</artifactId> 
    <version>LATEST</version> 
</dependency> 
<dependency> 
    <groupId>io.rest-assured</groupId> 
    <artifactId>json-path</artifactId> 
    <version>LATEST</version> 
</dependency>

For example, checking the response status from an API call is as simple as (illustrative code):

import org.json.simple.parser.ParseException;
import java.io.IOException;

import io.restassured.response.Response;

import org.apache.http.HttpStatus;
import org.testng.Assert;
import org.testng.annotations.BeforeMethod;


@BeforeMethod(groups = { "ABC1" })
public void setup() throws IOException, ParseException {

   // SendRequestGetResponse is an internal helper that fires the request and returns the response.
   Response resp = SendRequestGetResponse.sendRequestgetResponse(RequestObj, RequestUrl);
   Assert.assertEquals(resp.getStatusCode(), HttpStatus.SC_OK);

}

Coding test case assertion checks with this library is a breeze due to its human-readable syntax.

E.g., 

Assert.assertTrue(StatusCodelist.contains("001"), "Status code list does not have Status code 001");

Another good reason to use this library is that it’s readily integrated with testing frameworks like TestNG or JUnit.

How do we run tests periodically?

Tests are only helpful if they are run periodically to find bugs fast and early. We have various types of testing requirements, such as daily tests, weekly tests, etc. We use Jenkins to schedule these crons and also to run ad hoc jobs. Once a test suite is run, its results are mailed to all dev team stakeholders.

Why we use Jenkins –

It helps streamline all crons in a single place rather than having multiple crontabs spread across various servers.

At Skeps, Jenkins is run inside docker. Setting up your own Jenkins is as simple as running 

docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins 

You can read more about Jenkins at https://www.jenkins.io/

How we set up our testing environment

To test our systems, databases like SQL, Redis, etc. are required along with the code. For provisioning these, we could have created a separate isolated environment of servers, which would have served our purpose satisfactorily. But we went one step further and created a docker-compose environment for our testing infra. It enables each of our developers, as well as the testing team, to spawn the testing setup on their own systems and tear it down in no time.

A typical docker-compose file for a simple system consisting of a REST API application and multiple databases looks like this –

docker-compose.yml

version: '2'
services:
    api_service:
        environment:
            LOG_LEVEL: debug
        image: <your application image>
        ports:
            - 80:80
        links:
            - database
            - redis
        depends_on:
            - database
            - redis
    database:
        image: mysql
        ports:
            - 3306:3306
    redis:
        image: redis
        ports:
            - 6379:6379

The docker-compose file above would spawn an API service from the given docker image along with a MySQL database and Redis, with the API service depending on MySQL and Redis. Spawning our actual test infra requires many more services but is just as straightforward.

Test Reports

Tests are only as good as the action taken on their failure. So, to keep everyone apprised of the testing status, we mail the periodic test reports to our dev teams and managers in a very concise and crisp format, which makes the reports easy to analyze.