Robust API communication with exponential backoff

Every API fails at random points in time, and that's unavoidable. Sadly, this is rarely handled correctly when integrating third-party APIs. I see it very often. "Hey there! But I'm using try-catch and handling errors, sometimes I even log them to a file…" one might say. Well, so what? What happens when a call fails and you miss data that needed to be fetched during the daily ETL process? Or when you send data to a business partner's API, it fails for some reason, and they miss the information? What then? As long as you use cron and have the output emailed to some mailbox that is actually being monitored – you'll notice. Maybe you use Sentry or some other application monitoring/error tracking software and you'll spot the anomaly. But imagine having dozens of such jobs running on a daily basis – it's easy to lose track.

I think you get my point now. API errors occur quite often. Most of them are due to temporary service unavailability, caused mainly by too much traffic at a given moment. The simple solution is to retry. In this post, I'll show how to easily implement an efficient retry mechanism.
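To make the idea concrete before the full post, here is a minimal sketch in plain PHP – the retryWithBackoff name and the delay values are mine, purely for illustration:

<?php

/**
 * Retries $operation up to $maxAttempts times, doubling the delay
 * after each failure (exponential backoff) and adding random jitter
 * so that concurrent clients don't retry in lockstep.
 */
function retryWithBackoff(callable $operation, $maxAttempts = 5, $baseDelayMs = 200)
{
    for ($attempt = 1; ; $attempt++) {
        try {
            return $operation();
        } catch (\Exception $e) {
            if ($attempt >= $maxAttempts) {
                throw $e; // give up; let the caller (or monitoring) handle it
            }
            // 200 ms, 400 ms, 800 ms, ... plus up to 100 ms of jitter
            $delayMs = $baseDelayMs * (2 ** ($attempt - 1)) + mt_rand(0, 100);
            usleep($delayMs * 1000);
        }
    }
}

// Usage: wrap the flaky API call instead of relying on a bare try-catch.
$response = retryWithBackoff(function () {
    return file_get_contents('https://api.example.com/daily-report');
});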

Read more

Google BigQuery – querying repeated fields

Google BigQuery is probably one of the best data warehouses on the market nowadays. It has dominated the Big Data landscape with its infinite scaling capabilities (querying over petabytes of data), ANSI SQL support, and ease of use. It has proven its worth in many use cases.

One of the least used and least appreciated features, in my opinion, is repeated fields. The name doesn't convey the intention well, so for the sake of simplicity consider it an array field or a nested field. You can define any structure you like inside a repeated field, using the same types available to regular columns. The important part is to set the mode to REPEATED on a field of type RECORD.
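To illustrate, here is how such a table could be created with the google/cloud-bigquery PHP client – a sketch, with the project, dataset, and field names made up for the example:

<?php

require 'vendor/autoload.php';

use Google\Cloud\BigQuery\BigQueryClient;

$bigQuery = new BigQueryClient(['projectId' => 'my-project']);

// An "orders" table where each row holds an array of line items:
// the "items" field is of type RECORD with mode REPEATED.
$bigQuery->dataset('my_dataset')->createTable('orders', [
    'schema' => [
        'fields' => [
            ['name' => 'order_id', 'type' => 'STRING', 'mode' => 'REQUIRED'],
            [
                'name'   => 'items',
                'type'   => 'RECORD',
                'mode'   => 'REPEATED',
                'fields' => [
                    ['name' => 'sku',      'type' => 'STRING'],
                    ['name' => 'quantity', 'type' => 'INTEGER'],
                ],
            ],
        ],
    ],
]);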

Read more

Quickly ingest initial data to Redis

Imagine you have a massive data pipeline where thousands of requests per second need to read (that's easy) or write (that's harder) data. The obvious and often right choice would be to use Redis to handle all that.

But what happens when you launch it in production and need some historical data in order to keep consistency? Of course, you need to import it. There are many ways to achieve that, including writing a custom script. I urge you to have a look at the redis-cli --pipe option, also called Redis Mass Insertion, which lets you leverage the Redis protocol to ingest a lot of data really quickly (way faster than a custom script migrating data through a Redis SDK).
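To give a taste of how it works: redis-cli --pipe reads the raw Redis protocol (RESP) from stdin, so the importer only has to print encoded commands. A rough sketch, where the file name and data layout are assumptions:

<?php
// gen-resp.php (name assumed)

// Encodes one Redis command in the RESP wire format
// that `redis-cli --pipe` reads from stdin.
function toResp(array $args)
{
    $resp = '*' . count($args) . "\r\n";
    foreach ($args as $arg) {
        $resp .= '$' . strlen($arg) . "\r\n" . $arg . "\r\n";
    }
    return $resp;
}

// Stream historical key/value pairs as SET commands.
$handle = fopen('historical-data.csv', 'r');
while (($row = fgetcsv($handle)) !== false) {
    list($key, $value) = $row;
    echo toResp(['SET', $key, $value]);
}
fclose($handle);

Then php gen-resp.php | redis-cli --pipe ingests everything in one shot, without a round trip per command.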

Read more

Commarize – publicly available and open-sourced

TL;DR: commarize.com turns multi-line input into comma-separated output.

Full story: around 6 years ago I created a simple tool to speed up my daily job. The problem: our Affiliate Manager kept giving me an Excel file with one column – the IDs of customers whose affiliate association had to be changed in the database. There was a simple query behind it:

UPDATE clients
SET affiliate_id = 100001
WHERE id IN (<here goes comma separated list of clients>);

Of course, you can do that somehow in Excel. I often pasted that column into Vim and added the commas using a macro. But it was becoming a hassle, as I was being asked a couple of times per week, sometimes a couple of times a day.

I decided to create a simple tool which looked ugly but worked just fine.
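Its core fits in a few lines of PHP; here is a hedged sketch of the transformation (not necessarily the exact code behind commarize.com):

<?php

// Turns multi-line input into a comma-separated list,
// trimming whitespace and skipping empty lines.
function commarize($input)
{
    $lines = preg_split('/\R/', $input); // split on any kind of line break
    $lines = array_filter(array_map('trim', $lines), 'strlen');
    return implode(', ', $lines);
}

echo commarize("1001\n1002\n1003\n"); // 1001, 1002, 1003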

Read more

Producing AVRO messages with PHP for Kafka Connect

Apache Kafka has become an obvious choice and industry standard for data streaming. When streaming large amounts of data, it's often reasonable to use the AVRO format, which has at least three advantages:

  • it’s one of most size efficient (compared to JSON, protobuf, or parquet); AVRO serialized payload can be 10 times smaller than the JSON equivalent,
  • enforces usage of a schema,
  • works out of the box with Kafka Connect (it’s a requirement if you’d like to use BigQuery sink connector).

Let's see how to send data to Kafka in AVRO format from a PHP producer, so that Kafka Connect can parse it and push the data to a sink.
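As a teaser, here is a condensed sketch using an avro-php library and the rdkafka extension. Kafka Connect's AVRO converter expects the Confluent wire format – a zero magic byte plus the big-endian 4-byte schema registry ID in front of the binary payload. The broker address, topic, schema, and schema ID below are placeholders:

<?php

require 'vendor/autoload.php'; // composer autoload for an avro-php package

$schemaJson = '{"type":"record","name":"pageview","fields":[
    {"name":"url","type":"string"},
    {"name":"user_id","type":"long"}
]}';
$schema = AvroSchema::parse($schemaJson);

// Serialize one record to AVRO binary.
$io = new AvroStringIO();
$writer = new AvroIODatumWriter($schema);
$writer->write(['url' => '/home', 'user_id' => 42], new AvroIOBinaryEncoder($io));

// Confluent wire format: magic byte 0 + big-endian 4-byte schema ID.
$schemaId = 1; // in real code, the ID assigned by the Schema Registry
$payload = pack('C', 0) . pack('N', $schemaId) . $io->string();

// Produce with the rdkafka extension.
$conf = new RdKafka\Conf();
$conf->set('metadata.broker.list', 'localhost:9092');
$producer = new RdKafka\Producer($conf);
$topic = $producer->newTopic('pageviews');
$topic->produce(RD_KAFKA_PARTITION_UA, 0, $payload);
$producer->flush(10000);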

Read more

Real-time big data processing with Spark Streaming

Big Data is a trending topic in the IT sector and has been for quite some time. Nowadays vast amounts of data are being produced, especially by web applications, HTTP logs, and Internet of Things devices.

For such volumes, traditional tools like Relational Database Management Systems are no longer suitable. Terabytes or even petabytes are quite common numbers in a big data context, which is definitely not a capacity that MySQL, PostgreSQL, or any other relational database can handle.

To harness huge amounts of data, Apache Hadoop would generally be the first and natural choice, and it's probably the right one, with one assumption: Apache Hadoop is a tool for batch processing. It has proven extremely successful for many companies, such as Spotify, whose recommendation, radio, and playlist workloads are well suited to batch processing. However, it has one downside – you need to wait for your turn. It usually takes about one day to process everything, scheduled accordingly and executed in a fail-over manner.

But what if we don’t want or can’t wait?

Read more

Type Hinting is important

One of my favorite PHP interview questions is: what is Type Hinting and why is it important? To put the definition in one sentence: Type Hinting is a way to declare the type of a parameter in a function signature, and it's a sine qua non for leveraging polymorphism. Because of PHP's dynamic typing, parameters don't need a declared type at all. Also, by type here I mean complex types (class, abstract class, interface, array, closure), not primitives like integer or double.
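A quick illustration (the Logger names are made up): hinting against an interface guarantees the method exists while still accepting any implementation – which is exactly where polymorphism kicks in.

<?php

interface Logger
{
    public function log($message);
}

class FileLogger implements Logger
{
    public function log($message)
    {
        file_put_contents('/tmp/app.log', $message . PHP_EOL, FILE_APPEND);
    }
}

class StdoutLogger implements Logger
{
    public function log($message)
    {
        echo $message, PHP_EOL;
    }
}

// The Logger type hint guarantees log() exists on $logger,
// and any implementation can be passed in (polymorphism).
function processOrder($orderId, Logger $logger)
{
    // ... business logic ...
    $logger->log("Order $orderId processed");
}

processOrder(42, new FileLogger());
processOrder(43, new StdoutLogger());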

Read more

Immutable value objects in PHP

Value objects are one of the building blocks of Domain-Driven Design. They represent a value and do not have an identity. Consequently, two value objects are equal if their values are equal.

Another important feature is that Value Objects are immutable, i.e. they cannot be modified after creation. The only valid way to create a Value Object is to pass all required information to the constructor (where it should also be validated). No setter methods should exist.
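The classic textbook example is Money; here is a sketch of my own, not code from the post: validation in the constructor, no setters, equality by value, and "modification" returning a new instance.

<?php

final class Money
{
    private $amount;
    private $currency;

    public function __construct($amount, $currency)
    {
        if (!is_int($amount)) {
            throw new InvalidArgumentException('Amount must be an integer (minor units).');
        }
        if (!preg_match('/^[A-Z]{3}$/', $currency)) {
            throw new InvalidArgumentException('Currency must be a 3-letter ISO code.');
        }
        $this->amount = $amount;
        $this->currency = $currency;
    }

    // No setters: "modifying" a value object yields a new instance.
    public function add(Money $other)
    {
        if ($this->currency !== $other->currency) {
            throw new InvalidArgumentException('Currency mismatch.');
        }
        return new self($this->amount + $other->amount, $this->currency);
    }

    // Two value objects are equal if their values are equal.
    public function equals(Money $other)
    {
        return $this->amount === $other->amount
            && $this->currency === $other->currency;
    }
}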

Read more

Software developers care too much about tools

Lately I've been seeing a perilous situation in the software development world: plenty of good devs are tightly bound to their tools. By tools, I mean mostly frameworks. I would like to elaborate on that a bit; these are my personal opinions and they aren't meant to offend anyone.

First of all, we all need to admit that the quality of modern MVC frameworks has risen a lot compared with the state of things a few years ago. Speaking about PHP – at the time I turned my attention to this language, it was pure wilderness. We did not have any strong framework (unlike Ruby on Rails, which was the sine qua non choice for Ruby web development). That led to the development of multiple projects: some of them are dead now (or should be), some never gained good market adoption, and some are industry leaders at the moment (Symfony and Zend).

Read more

Testing in isolation with Symfony2 and WebTestCase

It's extremely important to have the same state of the System Under Test for every run. In most cases this is achieved by having the same contents in the database for every test. I described how to achieve it in the Fully isolated tests in Symfony2 blog post about two years ago (by the way, it's the most popular post on this blog). That was a time when PHP's Traits weren't that popular.
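To sketch the idea (one common approach; the full post covers the details): a trait can wrap every test in a database transaction that gets rolled back afterwards, so each test sees the same database state.

<?php

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

// Extracted into a trait so every functional test class can reuse it.
trait IsolatedDatabaseTrait
{
    protected $client;
    protected $entityManager;

    protected function setUp()
    {
        $this->client = static::createClient();
        $this->entityManager = $this->client->getContainer()
            ->get('doctrine.orm.entity_manager');
        // Every change made by the test happens inside this transaction...
        $this->entityManager->beginTransaction();
    }

    protected function tearDown()
    {
        // ...and is rolled back here, restoring the original state.
        $this->entityManager->rollback();
        parent::tearDown();
    }
}

class ProductControllerTest extends WebTestCase
{
    use IsolatedDatabaseTrait;

    public function testList()
    {
        $this->client->request('GET', '/products');
        $this->assertTrue($this->client->getResponse()->isSuccessful());
    }
}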

Read more