9 Lessons Learned from Building 60 Data Source Integrations 

Gone are the days when developers had to code every aspect of their product from scratch. Today, a cacophony of databases and APIs exists with the explicit purpose of enabling developers to build upon existing frameworks and stacks. But, as in any menagerie, some birds squawk and squeal while others sing in perfect tune.

Over the past few months, our team at Panoply.io has implemented over 60 data source integrations into our platform. To achieve this, we developed a data extraction framework designed to handle different implementations of data sources, so that any future data source can be integrated with a few hours of coding. The foremost challenge we faced was making this layer robust enough to survive changes over time, as well as feature and version fragmentation, without kicking off an endless maintenance spiral.

Data sources are the instruments in the orchestra of your destination database. In our case the destination database is a data warehouse, but this holds true for any product or service pulling data from external sources. Travel startups like Freebird are perfect examples of services dependent on external data. For products to deliver this data, the orchestra needs to be conducted. Having done this more than once, we've picked up a few best practices for writing such integrations quickly and reliably. I will outline the 9 most common lessons here.

Before we begin, please note that I will not be discussing obvious engineering best practices, such as writing tests, reading docs, or building generic code. Nor will I discuss the underlying architecture of such a framework or the enormously important task of securing and encrypting the data. The last two are articles in their own right.
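To make the framework idea concrete before diving into the lessons: a plugin-style layer, where each integration implements a small shared interface and the common pipeline handles everything else, might look like the minimal Python sketch below. The `DataSource`, `connect`, and `extract` names (and the `SalesforceSource` example) are hypothetical illustrations of the pattern, not Panoply's actual API.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, Iterator


class DataSource(ABC):
    """One class per integration; the framework only talks to this interface."""

    @abstractmethod
    def connect(self, credentials: Dict[str, Any]) -> None:
        """Authenticate against the external API or database."""

    @abstractmethod
    def extract(self) -> Iterator[Dict[str, Any]]:
        """Yield raw records; the shared pipeline handles batching and loading."""


class SalesforceSource(DataSource):
    """A new integration only implements the source-specific parts."""

    def connect(self, credentials: Dict[str, Any]) -> None:
        # Hypothetical: stash an OAuth token for later requests.
        self.token = credentials["token"]

    def extract(self) -> Iterator[Dict[str, Any]]:
        # Source-specific fetch/pagination logic would live here;
        # a static record stands in for a real API call.
        yield {"id": 1, "name": "Acme"}


# Usage: the pipeline is identical regardless of which source is plugged in.
source = SalesforceSource()
source.connect({"token": "example-token"})
for record in source.extract():
    print(record)
```

Because every source hides behind the same two methods, changes in an upstream API stay contained inside one class, which is what keeps the maintenance spiral from spreading across the whole platform.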

