Extending .z.ts to execute multiple functions at different intervals

This is a guest post by Mark Street. If you like it, be sure to check out his other posts, or find him on LinkedIn. If you are interested in being a guest blogger on enlist[q], please contact me.

As we previously learnt, q/kdb+ has a callback function .z.ts which fires every x milliseconds, where x is the interval set via the \t command.

q)\t 100 / set .z.ts to fire every 100 milliseconds

This works just fine if we want to fire a single function at a fixed interval (e.g. when running the TickerPlant in batch mode), but leaves us a little stuck if we want to execute different functions at different intervals.

An approach to solving this is to create and maintain a list of functions we want to call, along with the interval at which each should be triggered, and, when the .z.ts callback fires, trigger the relevant ones. We will refer to this combination of a function and an interval as a ‘job’.
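To give a flavour of where this is heading, here is a minimal sketch of such a job registry. The table layout and the names jobs and addJob are my own illustration, not necessarily what the full post builds:

/ registry of jobs: the function, its interval in ms, and its next scheduled run
jobs:([] func:(); interval:`long$(); next:`timestamp$())

/ register function f to run every ms milliseconds
addJob:{[f;ms]`jobs insert (enlist f;enlist ms;enlist .z.p+ms*0D00:00:00.001);}

/ on every timer tick, run the jobs that are due and schedule their next run
.z.ts:{
  due:exec i from jobs where next<=.z.p;
  if[count due;
    {x[]} each jobs[due]`func;
    update next:.z.p+interval*0D00:00:00.001 from `jobs where i in due];
 }

With that in place, jobs at different intervals can share one fast base timer:

q)addJob[{0N!"heartbeat"};1000] / fire every second
q)addJob[{0N!.z.p};5000] / fire every 5 seconds
q)\t 100 / base tick of 100 milliseconds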

Continue reading “Extending .z.ts to execute multiple functions at different intervals”

Using the timer function .z.ts in q/kdb+

Note: You can now subscribe to my blog here to receive the latest updates.

q/kdb+ has several pre-defined functions in the dot z namespace such as .z.a to get the IP address, .z.h to get the host, and .z.b to get the dependencies. Today, we will discuss a callback function in the dot z namespace called .z.ts.

.z.ts is a very simple timer function that is invoked at regular, pre-configured intervals. The interval is set by the \t command and is 0 (disabled) by default.

For example:

q)\t
0i

To invoke a function regularly, you need to set the timer and define the .z.ts function. For example, if I want to print the current timestamp every 5 seconds, here is how I would do that:

First, set the timer using milliseconds:

q)\t 5000
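
Then define .z.ts itself. The excerpt is truncated here, but a minimal completion would be:

q).z.ts:{show .z.p} / print the current timestamp every time the timer fires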
Continue reading “Using the timer function .z.ts in q/kdb+”

Enabling a modern hybrid cloud kdb+ stack with PubSub+

Note: You can now subscribe to my blog here to receive the latest updates.

Things used to be much simpler just a few years ago, when you had all your applications deployed in your on-prem datacenter. Sure, you had to manage all of that yourself, but it was easy to deploy your applications on your finite number of servers. Things are much different now. Cloud computing has really taken off, and you don’t have to worry about managing your own datacenter anymore, at least not to the extent you used to. Many companies, especially startups, have decided to embrace the cloud fully. However, if you are a large enterprise, you still have your on-prem datacenter for the critical applications managing sensitive data, but everything else has either already migrated to the cloud or is in the process of migrating.

Similarly, your kdb+ stack used to be fully on-prem, running on multiple powerful servers spread across the world to capture market data globally. But slowly, you are realizing that maybe there is an alternate way to manage your kdb+ stack. Maybe not all components of your kdb+ stack need to be on-prem. Maybe other applications in your organization might benefit from having access to the data in your kdb+ database.

However, there is a problem. Not only has your kdb+ stack evolved, but other application stacks have also evolved over time and are now deployed flexibly on-prem or in a hybrid/multi-cloud setup. How do you manage data transfer between your q applications running locally on-prem and in the public cloud? And how do you then make this data available to other applications in a hybrid/multi-cloud environment?

I told you life was much simpler before.

Continue reading “Enabling a modern hybrid cloud kdb+ stack with PubSub+”

Publishing and Consuming messages from PubSub+ in a q/kdb+ stats process

Note: You can now subscribe to my blog here to receive the latest updates.

This post is the second in a series I have written as part of a data analytics pipeline spanning multiple languages, databases, and environments. You can learn more about the pipeline in my final post here.

A typical kdb+ architecture consists of several q processes tightly coupled together, plus perhaps a Java feed handler. The feed handler is responsible for capturing market data and sending it to a ticker plant, which then routes it to different real-time subscribers. The most popular real-time subscriber is the RDB (real-time database), which keeps the raw real-time data in memory and then persists it to disk at the end of the day.

The second most popular real-time subscriber is usually a bar generation process that, just like the RDB, subscribes to real-time updates from the ticker plant. However, instead of saving the raw updates, this process computes real-time analytics. These stats are usually computed every minute but can differ depending on the individual use case. The bar generation process persists the data to disk and/or sends it off to another process interested in this data.

In this post, I would like to show you how your bar generation process can consume streaming data from Solace’s PubSub+ event broker, generate minutely statistics, and then publish those stats back to PubSub+ on dynamic topics.
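To give a feel for the stats step in isolation, here is a rough, transport-agnostic sketch of minutely bar generation; the PubSub+ publish and subscribe calls used in the actual post are omitted, and the names trade and genBars are mine:

/ raw updates accumulated from the broker
trade:([] time:`timestamp$(); sym:`$(); price:`float$(); size:`long$())

/ compute one-minute OHLC/volume bars from the raw updates
genBars:{[]
  select open:first price, high:max price, low:min price,
    close:last price, vol:sum size
  by sym, bar:`minute$time from trade}

Each minutely result from genBars[] would then be published back to the broker on a per-symbol topic.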

You can find the code for the q process here.

Continue reading “Publishing and Consuming messages from PubSub+ in a q/kdb+ stats process”

Bringing the power of pub/sub messaging to kdb+

Note: You can now subscribe to my blog here to receive the latest updates.

A typical kdb+ architecture (in a market data environment) is composed of multiple q processes sharing data with each other. These processes are usually feed handlers, ticker plants, real-time subscribers (RDBs, PDBs, bar processes, etc.), historical databases (HDBs), and gateways.

Here is what the architecture looks like:

While this is a popular architecture that has been deployed in many production environments, it does come with some challenges:

  • lack of pub/sub messaging – there is no native pub/sub messaging pattern built into kdb+. While q processes can publish data to multiple other q processes, this isn’t true pub/sub, where you publish the data only once and the messaging layer fans it out. This becomes more important as you scale and add more consumers.
  • slow consumers – if one of the downstream applications is unable to handle data load, it can negatively impact your ticker plant and other downstream processes.
  • data resiliency – what happens if the bar stats process in the architecture above crashes due to heavy load or a network/equipment failure? How will your applications fail over? Will the data in flight be lost?
  • sharing data with other teams – while kdb+ does have APIs available for different languages, kdb+ developers still have to implement and support these individual APIs internally.
  • tightly coupled – the above architecture is so tightly coupled that it can be difficult to modify and deploy one process without impacting all the others.
  • cloud migration – as companies expand to the cloud, there is a need to transmit data securely to processes (written in q or another language) in the cloud.
  • lack of guaranteed delivery – when dealing with critical data such as order/execution flow, you want to guarantee that the published data was consumed by interested consumers.
Continue reading “Bringing the power of pub/sub messaging to kdb+”

q/kdb+ API for getting data from IEX Cloud

Last year, I wrote a q/kdb+ and Python API for getting data from IEX. IEX, the Investors Exchange, is an American stock exchange that provides a lot of financial data for free. You can get access to more data with a paid subscription.

Last year’s API was based on IEX’s v1 API, which has now been retired. They have moved on to a new platform called IEX Cloud and started capping both the amount and the type of data you can access monthly on the free tier. On the plus side, IEX Cloud seems to be faster and more stable. There is also crypto data available now, if you are interested. They have a nice management platform where you can see your usage and get access to other resources.

I decided to create another q/kdb+ wrapper around IEX Cloud’s REST API so that you can get data from IEX natively in kdb+. At first, I was thinking of using a great Python API called iexfinance, but decided to write something natively myself and mirror IEX’s REST API as closely as possible.

You can find the new wrapper on github.

Continue reading “q/kdb+ API for getting data from IEX Cloud”

What exactly are keyed tables?

Note: You can now subscribe to our YouTube video channel for q/kdb+ tutorials.

Well, I will tell you what they are not – tables! If you take away one thing from this post, it’s that keyed tables are poorly named and are not tables. However, after reading this post, you should have a better understanding of why they are named keyed tables.

Reviewing tables and dictionaries

To really understand keyed tables, we first need to review dictionaries and tables. Recall that a dictionary is simply a mapping from a list of keys to a list of values.
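As a quick refresher, with values chosen purely for illustration:

q)d:`a`b`c!1 2 3 / a dictionary: a list of keys mapped to a list of values
q)d`b
2
q)t:flip `sym`price!(`AAPL`GOOG;300.1 1500.5) / a table is a flip of a column dictionary
q)kt:([sym:`AAPL`GOOG] price:300.1 1500.5) / a keyed table
q)kt`AAPL / indexing by key returns the matching row as a dictionary
price| 300.1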

Continue reading “What exactly are keyed tables?”

Understanding type casting in q

This post is a follow-up to the previous post about datatypes in q. In that post, I went over all the datatypes available in q and how to identify them by their character, numeric, or symbol representations. Now, we are ready to discuss another important topic in q: type casting.
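As a small preview of what casting looks like in q:

q)`long$3.7 / cast by type name: float to long, rounding to nearest
4
q)"i"$9.9 / cast by character code
10i
q)"J"$"123" / an uppercase character code parses a string
123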

This post is also available as a video on our YouTube channel:

Continue reading “Understanding type casting in q”

Datatypes in q

In this post, we are going to cover all the datatypes available in the programming language q. These datatypes are known as atoms because they cannot be reduced to a smaller datatype. They are also referred to as nouns; we will discuss why that is in another post, when we cover q grammar.
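For example, the type keyword returns a negative type number for every atom:

q)type 42 / long atom
-7h
q)type 3.14 / float atom
-9h
q)type `abc / symbol atom
-11h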

This post is available as a video on our YouTube Channel.


Here is a list of q datatypes taken from Kx’s reference page.

Continue reading “Datatypes in q”

Video: Installing q/kdb+ using conda

After giving it a lot of thought, I have decided to start publishing video tutorials. While I love to write, I think there is definitely additional value in teaching your audience how to do something by showing it to them.

In my first video, I will show you how to install q and kdb+ using conda and also provide you with a brief introduction to what conda is and why it’s nice to be able to use it to install kdb+. This video is based on an earlier post of mine.

If you like the video, don’t forget to subscribe to my YouTube channel!