One of the immediate goals of the Yupana project is to go beyond an academic exercise and prove the conceptual viability of the Agent-Based Modeling approach in a real-world market simulation and prediction application. The crypto-asset ecosystem, and the Nakamoto Terminal (NTerminal) platform specifically, provide a perfect testbed thanks to a vast amount of time-stamped data, an active developer community, and a scalable infrastructure with access to multiple third-party services.
We originally conceived of Yupana as a standalone module built in Java and integrated with NTerminal through the Splunk REST API. However, after shifting the focus of the early stages of the project to time-to-MVP and access to the largest possible dataset, we realized that we could benefit from close integration with NTerminal and piggybacking off of its existing infrastructure. This realization came from one simple fact: the core of the Yupana project is a collection of time-stamped data points that describe agent state, which closely resembles the log-management systems used in IT infrastructure. Instead of bolting such a system onto a custom Java project built around one of the existing Agent-Based Modeling frameworks, we decided to take advantage of the much more successful Security Information and Event Management (SIEM) projects with their vibrant community of IT security and infrastructure engineers.
At the heart of NTerminal sits a highly scalable Splunk platform that collects, processes, and stores all customer-facing data. By using Splunk’s search engine as an abstraction layer, any customer can obtain the state of any crypto-asset-related agent or environmental variable, as well as its rate of change. This is precisely the kind of information that will be exchanged within Yupana. Beyond storage, Splunk can implement agent functions through scheduled searches, data models, and metrics, making it a Turing-complete system before we even get to the SDK layer, which enables us to expand functionality in Java and other languages without leaving the Splunk servers. In all of the above scenarios, Splunk’s internal search language acts as an abstraction layer between Yupana modules and the raw time-stamped events, calculating responses about unique agent states on the fly. Instead of keeping a unique key-value state for each agent and updating it every time there is a change in the system, we maintain an unbounded, continuously updating data set from which the required values can be derived at any moment using Splunk searches.
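The last point can be made concrete with a minimal sketch in Java, the language of our SDK work. All class and method names below are hypothetical illustrations: instead of mutating a per-agent record, the current state is folded from the immutable event log at query time, which is what a Splunk search effectively does over indexed events.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: derive agent state on the fly from time-stamped events,
// rather than maintaining a mutable key-value state per agent.
public class AgentStateDerivation {
    // A time-stamped event: which agent it concerns, and a signed delta
    // to that agent's (hypothetical) scalar state.
    record Event(long timestamp, String agentId, double delta) {}

    // Derive every agent's state as of a cutoff time by folding over
    // the event log -- the analogue of a search over indexed events.
    static Map<String, Double> stateAt(List<Event> log, long cutoff) {
        Map<String, Double> state = new HashMap<>();
        for (Event e : log) {
            if (e.timestamp() <= cutoff) {
                state.merge(e.agentId(), e.delta(), Double::sum);
            }
        }
        return state;
    }

    public static void main(String[] args) {
        List<Event> log = List.of(
            new Event(100, "alice", 5.0),
            new Event(200, "bob", 3.0),
            new Event(300, "alice", -2.0));
        // At t=250 the t=300 event is excluded: alice -> 5.0, bob -> 3.0
        System.out.println(stateAt(log, 250));
    }
}
```

The event log is append-only and unbounded; any historical state can be reproduced by changing the cutoff, which is not possible once a key-value store has been overwritten in place.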
To illustrate this, we can look at the communication between hypothetical modules that collect, process, and simulate Twitter users’ behavior:
In the diagram above, event data is ingested via an API sourcing module and passed into Splunk in the form of time-stamped events. The stored data can then pass through a number of functions that enrich and restructure it around the agents within a network. Aggregated agent data will organize information for high-level analytics and client-side consumption. The continuous nature of Yupana functions and searches within Splunk will ensure the relevancy of the data presented to the user.
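The aggregation stage of that pipeline can be sketched as follows. The record and class names are illustrative placeholders, not NTerminal internals: raw time-stamped tweet events are rolled up into per-agent summaries, the kind of rollup a scheduled Splunk search would perform for client-side consumption.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the module chain described above: raw
// time-stamped Twitter events are aggregated into per-agent summaries.
public class TwitterPipeline {
    record RawEvent(long timestamp, String user, String text) {}
    record AgentSummary(String agentId, long tweetCount) {}

    // Aggregation step: group events by agent and count them, sorted
    // by agent id for stable presentation to the client.
    static List<AgentSummary> aggregate(List<RawEvent> events) {
        return events.stream()
            .collect(Collectors.groupingBy(RawEvent::user, Collectors.counting()))
            .entrySet().stream()
            .map(e -> new AgentSummary(e.getKey(), e.getValue()))
            .sorted(Comparator.comparing(AgentSummary::agentId))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<RawEvent> events = List.of(
            new RawEvent(1, "alice", "btc up"),
            new RawEvent(2, "bob", "eth down"),
            new RawEvent(3, "alice", "hodl"));
        // Two tweets for alice, one for bob.
        System.out.println(aggregate(events));
    }
}
```

Because the summaries are recomputed from the event stream on each run rather than cached, re-running the aggregation always reflects the latest ingested data, mirroring the continuous searches described above.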
Using Splunk enables us to take full advantage of the Splunkbase app store, which can provide us not only with a distribution channel but also with a community of big-data engineers who already have experience running industrial-scale systems. The diversity of companies using Splunk will also help Yupana propagate beyond cryptocurrency applications into other industries that might want to take advantage of agent-based modeling.
Attaching Yupana to Nakamoto Terminal will also make it inherit NTerminal’s limitations, such as the latency of the Splunk database abstraction layer and vendor lock-in. Acknowledging these limitations early on and taking steps toward diversifying our codebase through the Splunk REST API, the Java SDK, and NTerminal should help us address them in future iterations of Yupana. We also believe that taking these calculated risks and making early versions of Yupana dependent on existing successful projects and their communities will help us take it beyond what is being achieved today by various Agent-Based Modeling projects.
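One hedge against lock-in is to confine all Splunk-specific knowledge to requests against its documented REST interface. The sketch below builds, but does not send, a request for Splunk’s standard `/services/search/jobs` endpoint; the host, token, and search string are placeholders, and a swappable builder like this is one way any Yupana module could talk to the data layer without embedding Splunk internals.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

// Sketch: construct a Splunk REST API request to submit a search job.
// Host, token, and SPL string are hypothetical placeholders.
public class SplunkRestSketch {
    static HttpRequest buildSearchJob(String baseUrl, String token, String spl) {
        // The "search" form parameter must begin with the literal
        // "search" command when submitting raw SPL.
        String body = "search=" + URLEncoder.encode("search " + spl, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder()
            .uri(URI.create(baseUrl + "/services/search/jobs"))
            .header("Authorization", "Bearer " + token)
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildSearchJob(
            "https://splunk.example.com:8089", "PLACEHOLDER_TOKEN",
            "index=twitter | stats count by user");
        System.out.println(req.method() + " " + req.uri());
    }
}
```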