Code Europe 2017 has come to a close. It was really fun; there were many interesting talks on all possible topics, from Big Data, through microservices and serverless, to WebVR, React and the rest of the frontend world. Here are a few topics that I found interesting.
RSocket (Reactive Socket)
As the name suggests, it is a socket, a reactive one. But what does that mean? It is a protocol, developed by Netflix, with the goal of simplifying communication between microservices. The “Reactive” part comes from the Reactive Streams specification, which RSocket fully supports. The idea behind the project is that instead of standard HTTP communication, which involves a lot of connections, handshakes and other overhead that is not very useful for communication between application components, it uses a single connection through which messages are passed, in one of a few possible ways, as streams of data.
Here are the possibilities:
- request/response – mimics the standard HTTP REST approach, with one object as the request and a one-element stream as the response
- request/stream – for one request, the response is a continuous stream of data
- fire-and-forget – a request with no response at all; we just send a message, for example some kind of notification, and do not care about a reply
- event subscription – similar to a queue or topic subscription on a message broker; we just get a stream of incoming data
- channel – a bidirectional stream, or rather two streams that can pass data both ways at the same time
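To make the shapes above concrete, here is a toy sketch in Python. This is not the real RSocket API, just an illustration of the five interaction styles, with streams modeled as plain generators:

```python
from typing import Iterator

# Toy model of RSocket's interaction styles; streams are plain generators.
# None of this is the real rsocket library API -- it only illustrates the shapes.

def request_response(request: str) -> str:
    """One request in, exactly one response out (REST-like)."""
    return f"response to {request}"

def request_stream(request: str) -> Iterator[str]:
    """One request in, a continuous stream of elements out."""
    for i in range(3):
        yield f"{request} item {i}"

def fire_and_forget(request: str) -> None:
    """Send and never wait for anything back (e.g. a notification)."""
    pass  # the caller gets no response at all

def subscribe(topic: str) -> Iterator[str]:
    """Subscription only, then a feed of incoming events."""
    for i in range(2):
        yield f"event {i} on {topic}"

def channel(outgoing: Iterator[str]) -> Iterator[str]:
    """Bidirectional: consume one stream while producing another."""
    for msg in outgoing:
        yield f"ack {msg}"
```

The key point is in the signatures: the five styles differ only in how many elements flow in each direction.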
Vert.x
There was a talk about reactive applications and systems, and in this context, Vert.x was shown as a possible solution for implementing such a system. I didn’t know Vert.x before, and I must admit that I still don’t know it well, so it is a bit hard to write about, but I’ll try. Vert.x is a framework that offers an abstraction layer over many different technologies: communication between components, connecting to third-party services, connecting to databases, serving REST services, consuming REST; it looks like it can do almost anything. And that is impressive. It reminds me of Spring and all the other Spring projects.
The only downside of Vert.x that I could think of after reviewing some examples is that it looks like callback hell most of the time. It seems that only a small set of modules supports the reactive approach, and everything else is done with callbacks, which is not much fun. But at least it is designed with asynchronous communication in mind, which is a big plus.
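The callback-hell problem is easy to illustrate outside of Vert.x. Here is a hypothetical three-step flow (connect, query, respond), first written callback-style, then with a minimal future-like wrapper; this is language-agnostic Python, not Vert.x code:

```python
# Hypothetical async steps in callback style (not Vert.x code): each step
# takes a callback, so a three-step flow nests three levels deep.
def connect(cb):
    cb("conn")

def query(conn, cb):
    cb(f"rows from {conn}")

def respond(rows, cb):
    cb(f"200 OK: {rows}")

def handle_callback_style(done):
    connect(lambda conn:
        query(conn, lambda rows:
            respond(rows, done)))  # nesting grows with every added step

class Future:
    """Minimal single-value future, just enough to show flat chaining."""
    def __init__(self, value):
        self.value = value

    def then(self, fn):
        return Future(fn(self.value))

def handle_composed():
    # The same flow, but each step chains onto the previous one -- flat,
    # no matter how many steps you add.
    return (Future("conn")
            .then(lambda conn: f"rows from {conn}")
            .then(lambda rows: f"200 OK: {rows}")
            .value)
```

Reactive APIs essentially give you the second style out of the box, which is why the limited reactive module coverage felt like a downside.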
GraphQL
GraphQL is a query language designed by Facebook to address the rigid response formats of REST APIs. With REST you define what the client can get, and that is what it will always get; whether it wants it or not, it receives the full package. So if a client wants, for example, only a book’s author, title and genre, it can’t have just that: it also gets the list of chapters, the price, the ISBN, a short description, everything, because that is how the REST endpoint works.
With GraphQL the client specifies what it needs: which fields, which relations, with the ability to define a sort of where clause on relations, so it gets exactly what it asked for and nothing more. A GraphQL endpoint offers two operations, Query and Mutation. As the names suggest, Query lets you query for specific data, and Mutation lets you modify data. On top of that, there is a typed schema describing the endpoint, which you can also read as a client. All this makes it a really powerful tool that, in the long run, simplifies your API a lot, as you don’t have to anticipate everything a client may need.
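As a sketch of what such a query looks like, here is a tiny Python helper that assembles one for the book example above. The `book` field and its arguments are hypothetical; real clients usually write queries by hand or use a client library rather than concatenating strings:

```python
def build_query(root: str, fields: list) -> str:
    """Build a minimal GraphQL query string for one root field.

    Purely illustrative -- it shows the shape of a query, nothing more.
    """
    body = " ".join(fields)
    return f"query {{ {root} {{ {body} }} }}"

# The client asks only for author, title and genre -- chapters, price,
# ISBN and the rest simply never cross the wire.
book_query = build_query("book(id: 1)", ["author", "title", "genre"])
```

The resulting string, `query { book(id: 1) { author title genre } }`, is the whole request: the server returns exactly those three fields and nothing else.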
The interesting thing is that GitHub offers a GraphQL endpoint with a graphical interface that lets you query repository data. You can easily learn and play with it there: it has code completion in the query editor and full schema documentation available. Go there and just play with it 🙂
More information about GraphQL can be found on the project website.
MicroProfile
I had no idea what it was when I went to this talk; all I knew was that it had something to do with microservices. So, what is it? It is a subset of the JEE specification, slimmed down to the minimum required for microservices. The current version consists of the JAX-RS, CDI and JSON-P specifications. A specification this slim allows for really small (file-size wise) services that can easily be stored in repositories, for example as Docker images based on another Docker image that provides a server implementing the spec. Currently, I know of four vendors that provide server distributions and Docker images supporting MicroProfile.
It is a really interesting concept and, in my opinion, a step in the right direction, as we often do not need all the scary enterprise stuff of full-blown application servers. I will probably still stick with the Spring Boot stack, but it is good to know that things on the JEE side are moving too. More information can be found here.
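The Docker workflow mentioned above is short enough to sketch. Both the base image name and the deployment directory below are placeholders; check your vendor’s documentation for the real ones:

```dockerfile
# Placeholder image name and deploy path -- every vendor differs.
# The base image ships a server implementing the MicroProfile spec;
# the service itself is just a thin WAR layered on top.
FROM vendor/microprofile-server
COPY target/orders-service.war /opt/deployments/
```

Because the heavy server layer is shared between all your services’ images, each additional service adds only the size of its own WAR to the registry.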
Serverless with Firebase and webtask.io
Serverless is the next big buzzword after microservices. Now, instead of many small applications deployed in containers, we will just have some services from providers that we can use! Firebase is such a service: it is a BaaS (Backend as a Service) with some FaaS (Function as a Service) capabilities. OK, but what does that mean? Firebase is a Google service that offers a set of services which can be used as a backend for most client applications. Its features are:
- Authentication and authorization support using Twitter, Facebook or Google OAuth2 services, or a simple login/password setup
- Real-time JSON object database that can push the data changes to the clients as they happen
- Cloud messaging for pushing notifications to clients
- Cloud Functions (FaaS), with which you can implement some complex logic that can use other third-party services
- Cloud Storage for storing all the other data
- Hosting of websites, for example an Angular client app
- And a few more, including ad support, configuration in the cloud, and analytics
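One thing that makes the Realtime Database easy to poke at is that every node is also addressable over plain REST as `<path>.json`. Here is a minimal sketch; the project name is hypothetical, and the actual network call is defined but deliberately not executed:

```python
import json
import urllib.request

def db_url(project: str, path: str) -> str:
    """Realtime Database REST endpoint: every node is addressable as <path>.json."""
    return f"https://{project}.firebaseio.com/{path.strip('/')}.json"

def read_node(project: str, path: str):
    """Fetch one JSON node. Not called here -- it needs a real project,
    network access and (usually) auth configured on the database rules."""
    with urllib.request.urlopen(db_url(project, path)) as resp:
        return json.load(resp)

# Hypothetical project name, for illustration only:
url = db_url("my-demo-project", "/messages/latest")
```

For real client apps you would use the official SDKs (which also give you the live push of changes), but the REST view is handy for quick experiments.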
It’s hard to describe it in a few words; it needs a whole separate post. But during the talk, a simple, complete app was written in a few minutes. It is really simple 🙂
Both solutions have a pretty generous free tier that allows you to publish an application, with flat and pay-as-you-go tiers available for later, when you need them.
Log management with ELK stack and Kafka
Most of you probably know what the ELK stack is, but some may not. ELK stands for Elasticsearch, Logstash and Kibana, and in a nutshell its job is to gather, index and present your applications’ logs in a useful form. Each of the three components has a different job:
- Logstash – parses logs and converts them into a useful form that can later be indexed, grouped, tagged etc.
- Elasticsearch – this is where all your logs are stored and indexed
- Kibana – the frontend that lets you browse your logs, create different views, search through them etc.
The basic idea is that you make Logstash tail all your log files, reading and parsing them. The parsed data is then pushed to Elasticsearch, where it is stored and indexed. Once everything is stored, indexed and beautiful, you can browse your data with Kibana. Sounds great.
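That pipeline maps almost one-to-one onto a Logstash configuration. The paths, the grok pattern and the Elasticsearch host below are placeholders to adapt to your own log format and setup:

```conf
# Placeholder paths and pattern -- adapt to your own log format.
input {
  file { path => "/var/log/myapp/*.log" }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```

The filter stage is where the raw line becomes structured fields (`ts`, `level`, `msg`), which is exactly what makes the later searching and grouping in Kibana useful.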
But a problem appears when you have microservices. Imagine that you have 50 services in total, multiple instances of some, and some that are dynamically created and killed. This means you have a lot of log files, some of which may not even exist when you configure Logstash.
One solution is to use logback with a Logstash appender that pushes the logs over TCP or UDP to the Logstash service. But this has a drawback: you create a single point of failure. If you have many services that can produce thousands of log entries a second, you will simply kill your poor Logstash service; it won’t be able to process all this data effectively. To fix this, you can use Kafka. Instead of having logback push logs to Logstash directly, you have it push log entries to Kafka. Kafka has persistent storage, so it can keep all your logs for a specified time, for example a few days, and Logstash can then consume this data at a pace that won’t kill it. Since Kafka can be clustered, you can easily scale this setup to keep up with more and more logs produced by your services.
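On the service side, this is just a logback configuration change. The sketch below follows the shape of the community logback-kafka-appender; the exact class and property names vary by version, and the topic and broker addresses are placeholders, so treat this as a shape, not a drop-in config:

```xml
<!-- Sketch only: class and property names follow the community
     logback-kafka-appender and may differ between versions;
     check its documentation before use. -->
<configuration>
  <appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder>
      <pattern>%d{ISO8601} %-5level [%thread] %logger - %msg%n</pattern>
    </encoder>
    <topic>app-logs</topic>
    <producerConfig>bootstrap.servers=kafka-1:9092,kafka-2:9092</producerConfig>
  </appender>
  <root level="INFO">
    <appender-ref ref="KAFKA" />
  </root>
</configuration>
```

The application code itself does not change at all; it keeps logging through slf4j/logback, and the appender decides where the entries go.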
It is a really nice setup that I will have to try some day 🙂
Also published on Medium.