As stated in the previous post, Yupana - Global Vision, conventional techniques for simulating complex environments have relied on a fixed set of rules governing how the model functions. Agent-based models (ABMs) have been used for this purpose, representing complex systems such as those in evolutionary biology or economics. While ABM methods have been effective for specific use cases, they are neither generalizable nor scalable across multiple systems. Popular heuristics such as machine learning (ML) and neural networks (NNs) are used to define correlations among data and produce robust pattern detection; yet on their own, these methodologies cannot model complex systems. Yupana proposes to combine the techniques of ABM and heuristics in an adaptive architecture designed for continuous integration with real-world data streams. The vision is a model that can recognize patterns, apply rules, and readapt to novel stimuli.
Yupana’s main architecture is based on an agent-based modeling framework, which allows discrete data agents to interact with each other and with the environment. One key distinction is that in typical ABM models, the rules of the environment and the initial data conditions are predetermined and static. With Yupana, the rules of the system are not known beforehand; they are determined as a function of the model’s ability to contextualize independent streams of real-world data. This makes Yupana a continuous ABM: as new data is produced, it is fed into the simulation to confirm agents’ predictions or readjust them. In this way, the rules can start as fuzzy representations and be gradually refined over time. Deploying the Yupana model will not require the full system to be operational, which will allow new data sources, environmental variables, and agents to be integrated on a rolling basis. To reduce the predetermined nature of conventional frameworks, the model will simply monitor the relationships between agents, with analytical tools built externally to abstract use-case implementations. By separating the visualization, arithmetic, and aggregation mechanisms from Yupana, the model can achieve an adaptive and scalable structure.
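To make the idea of fuzzy rules refined by a continuous data stream concrete, here is a minimal sketch. The agent, its single scalar "rule" estimate, and the exponential-moving-average update are all illustrative assumptions, not Yupana's actual mechanism:

```python
import random

class Agent:
    """Hypothetical sketch of a continuously updated agent: its rule is
    not predetermined, but starts fuzzy and is refined by new data."""

    def __init__(self):
        self.estimate = 0.0    # fuzzy rule parameter, starts uninformed
        self.confidence = 0.0  # grows as more observations arrive

    def observe(self, value, rate=0.1):
        # Nudge the rule estimate toward each new real-world observation.
        self.estimate += rate * (value - self.estimate)
        self.confidence = min(1.0, self.confidence + rate * (1 - self.confidence))

def run_stream(agent, stream):
    for value in stream:
        agent.observe(value)
    return agent.estimate

agent = Agent()
# Simulated real-world data stream centered on 5.0
random.seed(0)
final = run_stream(agent, (5.0 + random.uniform(-0.5, 0.5) for _ in range(200)))
print(round(final, 2))
```

Because the update never stops, the estimate keeps tracking the stream even if its underlying distribution drifts, which is the property the continuous-ABM design is after.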
Yupana agents are unique entities that hold a one-to-one relationship with a piece of the modeled environment. Activity in the environment updates the corresponding agent in the model. Like the agents found in conventional ABMs, these agents are discrete, allowing unique principles to emerge in each individual. Combining multiple autonomous agents in one environment enables collaboration between different agents and the overlapping communities they belong to. Agents with commonalities form functional clusters called agent group nodes, which communicate to form an additional network layer. Cooperative, antagonistic, time-dependent, and feedback behavior will be observed in networks of interrelated agents. By integrating classification algorithms and computational voting techniques, insights can be abstracted from the model. Using reinforcement learning and anomaly detection, the external environment can be used to refine model insights by assessing the validity of observed behaviors.
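A toy sketch of the clustering step may help: agents sharing a commonality are grouped into agent group nodes, and each node emits an aggregated signal to the higher network layer. The `topic` attribute and mean aggregation are illustrative assumptions:

```python
from collections import defaultdict

class Agent:
    def __init__(self, name, topic):
        self.name, self.topic, self.signal = name, topic, 0.0

def form_group_nodes(agents):
    """Cluster agents sharing a commonality (here, a 'topic' tag) into
    group nodes -- a hypothetical stand-in for Yupana's clustering."""
    groups = defaultdict(list)
    for a in agents:
        groups[a.topic].append(a)
    return groups

def group_signal(group):
    # A group node's outgoing signal: simple mean of member signals.
    return sum(a.signal for a in group) / len(group)

agents = [Agent("a1", "btc"), Agent("a2", "btc"), Agent("a3", "eth")]
agents[0].signal, agents[1].signal = 1.0, 3.0
nodes = form_group_nodes(agents)
print(sorted(nodes))               # ['btc', 'eth']
print(group_signal(nodes["btc"]))  # 2.0
```

In the full design the grouping criterion would itself be learned (for example by a classification algorithm) rather than read from a static tag as it is here.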
The model environment refers to the set of agents and algorithms needed to represent and distribute information across the model. The initial conditions of this environment will consist of agents containing two points of reference: the external world, via data streams monitoring real-time updates, and the other agents to which they hold relationships. Algorithms incorporating heuristic methods such as ML or NNs will allow agents to communicate with one another and with the external environment. As agents are regularly updated from their real-world reference point, also known as a receptive field, the heuristic algorithms will be reinforced toward accurate communication between the external environment and other agents. This will allow Yupana to tune algorithms toward recognizing the unique relationships between an agent and its reference points. We refer to these relationships (agent to environment, agent to agent) as the rules of the system, which will be learned in a training phase and continuously tuned as more data is produced externally and integrated.
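The reinforcement of an agent-to-agent rule can be sketched with the simplest possible heuristic: a single learned weight, adjusted online whenever the neighbor's actual update disagrees with the prediction. The one-weight model and learning rate are stand-ins for whatever ML/NN method an agent actually carries:

```python
class Agent:
    """Sketch: an agent holds two reference points -- an external data
    stream (its receptive field) and a related agent. One learned weight
    stands in for the heuristic algorithm mapping the agent's observation
    to a prediction about its neighbor."""

    def __init__(self):
        self.weight = 0.0  # a "rule of the system", learned online

    def predict(self, observation):
        return self.weight * observation

    def reinforce(self, observation, neighbor_actual, lr=0.01):
        # Adjust the rule so predictions match what the neighbor observed.
        error = neighbor_actual - self.predict(observation)
        self.weight += lr * error * observation

agent = Agent()
# Toy environment: the neighbor's value is always 2x this agent's stream.
for x in range(1, 100):
    obs = x % 10
    agent.reinforce(obs, 2.0 * obs)
print(round(agent.weight, 2))  # converges toward 2.0
```

The point is the feedback structure, not the model: each external update both tests the current rule and supplies the signal used to tune it.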
Running the Model
Yupana is designed to have a continuous relationship with the real world. This means that training the model and running the model will be very similar functions, as Yupana will be constantly refining itself. New inputs help fine-tune the algorithms that allow agents to represent their environment and send relevant input to related agents. The training protocol will implement this structure as well: historical data will be fed sequentially into the model at an accelerated pace to catch agents up to current data in their environment. The model will then depend on references to the external world to integrate new activity. When an agent is presented with an update in its external environment, it propagates that signal to related agents. The change in state induced in a related agent is then checked against updates within that agent's own external environment. The continuous nature of Yupana’s data integration allows the model to check its predictions and behaviors against real-world activity, thus refining its activity to stay in tune with external environments.
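Because training and running share one structure, the protocol can be sketched as a single loop with two phases: an accelerated replay of history, then a live phase in which each prediction is checked against the next real update. The callable update rule and the persistence forecast below are illustrative assumptions:

```python
def train_then_run(agent_update, history, live):
    """Sketch of the two-phase protocol: replay historical data
    sequentially to catch the agent up, then validate predictions
    against live data. `agent_update` is a hypothetical callable
    that folds one datum into the agent's state."""
    state = None
    for datum in history:       # training: accelerated replay of the past
        state = agent_update(state, datum)
    errors = []
    for datum in live:          # continuous phase: predict, then check
        prediction = state      # naive forecast: expect the last state
        errors.append(abs(prediction - datum))
        state = agent_update(state, datum)
    return errors

# Toy update rule: the agent simply tracks the latest observed value.
update = lambda state, datum: datum
errs = train_then_run(update, history=[1, 2, 3], live=[3, 3, 4])
print(errs)  # [0, 0, 1]
```

The error list is the hook for refinement: a real agent would feed it back into its heuristic (as in the reinforcement sketch above) rather than merely record it.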
Nakamoto Terminal Integration
Nakamoto Terminal CDC Diagram depicting Yupana Integration
Yupana is currently supplementing Nakamoto Terminal (NTerminal), a modular and flexible data aggregation and analysis platform. Its job is to provide unique insights into the digital currency space by consuming real-time financial, blockchain, and natural-language data. NTerminal’s information pipeline is composed of a set of modules centered around a message broker, which is responsible for retrieving, converting, organizing, and delivering data.
The first step of the information pipeline is data gathering and normalization, which is handled by the information source modules. They are named after and follow the logic of source components within the Spring Framework, on which NTerminal’s information pipeline is based. One of the major information subscribers following source integration is a group of natural-language and heuristic processors that handle social and traditional media, financial regulator announcements, messaging platforms, and other sources of human-generated text. By leveraging technologies such as named entity extraction, optical character recognition, image annotation, and sentiment analysis, they turn documents, images, and audio information into meaningful data streams that will later be fed into the Yupana module as the external environment.
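As a minimal illustration of turning human-generated text into a numeric data stream, consider a toy sentiment scorer. The word lists and scoring rule are invented for this sketch and bear no relation to NTerminal's actual processors, which rely on the far richer techniques listed above:

```python
# Toy lexicon -- an assumption for illustration only.
POSITIVE = {"gain", "surge", "bullish"}
NEGATIVE = {"loss", "crash", "bearish"}

def sentiment_score(text):
    """Score a document in [-1, 1] by counting lexicon hits,
    normalized by document length."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

# Each document becomes one point in a numeric stream an agent can consume.
stream = [sentiment_score(doc) for doc in
          ["Bitcoin posts a strong gain", "markets crash on bearish news"]]
print(stream)  # [0.2, -0.4]
```

However crude the scorer, the output shape is the important part: unstructured text reduced to a time-ordered numeric stream that downstream agents can treat like any other receptive field.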
While much of the information collected by NTerminal is available on the Splunk platform through REST APIs, as well as in flat-file format, Yupana components also have access to two additional databases for information sharing. One is based on Elasticsearch and is used for storing processed text. This allows NTerminal modules to offload resource-intensive keyword lookups and context extractions to the database itself. The other storage component is a classic relational database based on Postgres. NTerminal pipeline components mostly use it to store and share configuration settings, such as information resources to crawl, message routing, and running schedules. This data pipeline is referred to as the NTerminal Content Delivery Chain (CDC). The Yupana module will consume and allocate data from the CDC for modeling the cryptofinancial system.
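The configuration-sharing pattern described above can be sketched with an in-memory SQLite database standing in for the Postgres store; the table name and columns are assumptions for illustration, not NTerminal's actual schema:

```python
import sqlite3

# SQLite used here only as a self-contained stand-in for Postgres.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pipeline_config (
    module TEXT, resource TEXT, schedule TEXT)""")
conn.executemany(
    "INSERT INTO pipeline_config VALUES (?, ?, ?)",
    [("crawler", "https://example.org/feed", "*/5 * * * *"),
     ("router", "elasticsearch", "continuous")])

# Any pipeline component (including Yupana) can read shared settings.
rows = conn.execute(
    "SELECT module, resource FROM pipeline_config ORDER BY module").fetchall()
print(rows)
```

Keeping crawl targets, routing, and schedules in one relational store is what lets modules be added or reconfigured without redeploying the rest of the pipeline.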
Comments or questions? Join the discussion at BlockShop!