Architecture

The high-level system architecture comprises the following primary components and modules:
Task
A task is created and published by a Task Publisher whenever they wish to initiate a new federated training session (which can be continuous) to train a new model or improve an existing one. The Task Configuration includes parameters such as the following (an illustrative sketch appears after this list):
Model type, settings, and configuration, such as parameters, hyperparameters, model evaluation criteria, number of epochs, etc.
Trainers’ device requirements and specifications (mobile, laptop, IoT, other technical requirements, etc.)
Access – defining who can be a Task Trainer and Task Aggregator, whether the training is public or private, etc. For instance, a Task Publisher can maintain an allowlist or blocklist of Trainers, or allow only premium users to become Trainers, etc.
Number of federated learning rounds
Sensitivity/privacy parameters, such as the privacy budget
Minimum acceptable accuracy
Minimum number of Trainers
Application-specific parameters
Time requirements
Genesis model (if applicable)
Rewards to participants / training budget
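To make these parameters concrete, a minimal sketch of a possible Task Configuration follows; the field names, types, and values are illustrative assumptions rather than the network's canonical schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskConfiguration:
    """Illustrative Task Configuration; field names are assumptions, not a canonical schema."""
    model_type: str               # e.g. "image-classifier"
    hyperparameters: dict         # learning rate, batch size, epochs, ...
    device_requirements: dict     # minimum RAM, CPU/GPU, accepted device classes, ...
    access_policy: dict           # public/private, allowlist/blocklist, premium-only, ...
    num_rounds: int               # number of federated learning rounds
    privacy_budget: float         # sensitivity/privacy parameter, e.g. a DP epsilon
    min_accuracy: float           # minimum acceptable accuracy
    min_trainers: int             # minimum number of Trainers per round
    deadline_seconds: int         # time requirements for the task
    genesis_model_cid: Optional[str] = None  # identifier of the genesis model, if any
    reward_pool_tokens: float = 0.0          # training budget funded by the Task Publisher

config = TaskConfiguration(
    model_type="image-classifier",
    hyperparameters={"lr": 0.01, "batch_size": 32, "epochs": 3},
    device_requirements={"min_ram_gb": 4, "device_class": ["mobile", "laptop"]},
    access_policy={"visibility": "private", "allowlist": ["0xTrainerA", "0xTrainerB"]},
    num_rounds=50,
    privacy_budget=1.0,
    min_accuracy=0.85,
    min_trainers=10,
    deadline_seconds=7 * 24 * 3600,
    reward_pool_tokens=10_000.0,
)
```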
Smart Contracts
Task Manager Smart Contract: This contract manages the overall training process, from task submission by the Task Publisher and registration of Task Trainers and Aggregators to managing the Task according to its configuration.
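The contract's interface is not specified here; the following minimal sketch only illustrates the kind of task lifecycle the Task Manager governs, reusing the hypothetical TaskConfiguration above, with all state and method names assumed for illustration.

```python
from enum import Enum, auto

class TaskState(Enum):
    SUBMITTED = auto()    # the Task Publisher has published the Task Configuration
    RECRUITING = auto()   # Trainers and Aggregators register for the task
    TRAINING = auto()     # federated learning rounds are running
    COMPLETED = auto()    # minimum accuracy reached or rounds exhausted

class TaskManager:
    """Illustrative lifecycle logic mirroring the on-chain Task Manager; names are assumptions."""
    def __init__(self, config):
        self.config = config
        self.state = TaskState.SUBMITTED
        self.trainers, self.aggregators = [], []

    def register_trainer(self, address: str) -> None:
        assert self.state in (TaskState.SUBMITTED, TaskState.RECRUITING)
        self.state = TaskState.RECRUITING
        self.trainers.append(address)
        # Start training once the configured minimum number of Trainers has registered.
        if len(self.trainers) >= self.config.min_trainers:
            self.state = TaskState.TRAINING
```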
Reputation Smart Contract: This contract implements a reputation mechanism for participants based on their contribution and behavior in the network. The reputation of Trainers and Aggregators is calculated from their contribution and legitimacy (the evaluation procedures are detailed later in this paper, in the Process section) and is stored on-chain. This reputation directly impacts the rewards or penalties of Trainers and Aggregators, and it allows Task Publishers to better select Trainers for subsequent training batches, optionally configuring a minimum reputation threshold. Furthermore, reputation is shared across the entire network, enabling other Task Publishers to select Trainers based on their network-wide reputation and past performance training models for other Task Publishers.
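The exact reputation formula follows from the evaluation procedures described in the Process section; purely as an illustration, the sketch below assumes an exponential-moving-average update over per-round contribution scores, with a multiplicative penalty for updates that fail the legitimacy checks.

```python
def update_reputation(current: float, contribution_score: float,
                      legitimate: bool, alpha: float = 0.2,
                      penalty: float = 0.5) -> float:
    """Blend the latest contribution into the stored reputation (illustrative rule only).

    current            -- reputation currently stored on-chain, in [0, 1]
    contribution_score -- evaluated quality of this round's update, in [0, 1]
    legitimate         -- whether the update passed the legitimacy checks
    """
    if not legitimate:
        return max(0.0, current * penalty)            # penalize misbehaving participants
    return (1 - alpha) * current + alpha * contribution_score

# Task Publishers can then filter candidates by a minimum reputation threshold:
def eligible(reputation: float, threshold: float = 0.6) -> bool:
    return reputation >= threshold
```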
Aggregation Smart Contract: This contract automates the aggregation of model updates. Once the number of updates submitted by Trainers reaches a configured threshold, it automatically selects the Aggregator(s) who will participate in the aggregation step based on specific attributes. It also manages the on-chain identifiers of the model updates uploaded to the decentralized data storage.
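The selection attributes are not enumerated in this section; the sketch below assumes reputation-weighted random selection of Aggregators once the update threshold is reached, with the random seed derived from recent chain state so the draw is reproducible.

```python
import random

def select_aggregators(candidates: dict[str, float], k: int, seed: int) -> list[str]:
    """Pick k Aggregators with probability proportional to reputation (illustrative).

    candidates -- mapping of Aggregator address -> reputation score (assumed positive)
    seed       -- e.g. derived from a recent block hash so the draw is deterministic
    """
    rng = random.Random(seed)
    addresses = list(candidates)
    weights = [candidates[a] for a in addresses]
    chosen = []
    for _ in range(min(k, len(addresses))):
        pick = rng.choices(addresses, weights=weights, k=1)[0]
        idx = addresses.index(pick)
        addresses.pop(idx)      # selection without replacement
        weights.pop(idx)
        chosen.append(pick)
    return chosen
```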
Incentive Mechanism Smart Contract: This contract distributes the network’s native token to reward network participants, i.e., Trainers and Aggregators, based on their reputation and contribution; the rewards are funded by the Task Publisher.
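The concrete reward formula is part of the incentive design rather than this section; as a hedged illustration, the sketch below splits the Task Publisher's reward pool in proportion to each participant's reputation-weighted contribution.

```python
def distribute_rewards(pool: float, participants: dict[str, dict]) -> dict[str, float]:
    """Split the Task Publisher's reward pool proportionally (illustrative formula only).

    participants -- address -> {"reputation": float, "contribution": float}
    """
    weights = {addr: p["reputation"] * p["contribution"] for addr, p in participants.items()}
    total = sum(weights.values())
    if total == 0:
        return {addr: 0.0 for addr in participants}
    return {addr: pool * w / total for addr, w in weights.items()}

payouts = distribute_rewards(
    pool=10_000.0,
    participants={
        "0xTrainerA":    {"reputation": 0.9, "contribution": 0.8},
        "0xTrainerB":    {"reputation": 0.6, "contribution": 0.7},
        "0xAggregatorC": {"reputation": 0.8, "contribution": 1.0},
    },
)
```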
Training Agent
This user-side module runs on the user’s device (or any other end device relevant to the training process). Depending on the use case and type of user, different Training Agents can be deployed, ranging from lightweight desktop software and mobile apps to browser plugins. The goals of the Training Agent are to locally collect and organize data relevant to the training task, manage the training task (download, training, and upload), and interact with the blockchain components (usually via a cryptographic wallet).
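As an illustration of these responsibilities, the sketch below shows one possible round loop for the Training Agent; the chain, storage, wallet, and training collaborators are placeholders injected by the caller, not an actual Neurolite API.

```python
def run_training_round(task_id, chain, storage, wallet, train_locally, local_dataset):
    """One Training Agent round (illustrative control flow; all collaborators are injected).

    chain         -- client for the smart contracts (placeholder interface)
    storage       -- client for the decentralized storage (placeholder interface)
    wallet        -- the user's cryptographic wallet, used to sign transactions
    train_locally -- callable running local training on the downloaded model
    local_dataset -- data the agent has collected and organized on-device
    """
    # 1. Fetch the current global model from decentralized storage.
    global_cid = chain.get_current_model_cid(task_id)
    model = storage.download(global_cid)

    # 2. Train locally on the device's own data.
    update = train_locally(model, local_dataset)

    # 3. Upload the (encrypted) update, then register its identifier on-chain,
    #    signing the transaction with the user's wallet.
    update_cid = storage.upload(update)
    chain.submit_update(task_id, update_cid, signer=wallet)
    return update_cid
```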
Aggregation Agent
This module is deployed on Aggregators' devices and is responsible for the aggregation process.
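The aggregation rule itself is not fixed in this section; as one common choice, the sketch below implements federated averaging (FedAvg), weighting each Trainer's update by its number of local training samples.

```python
import numpy as np

def federated_average(updates: list[dict[str, np.ndarray]],
                      sample_counts: list[int]) -> dict[str, np.ndarray]:
    """Weighted average of model parameters across Trainers (FedAvg; illustrative).

    updates       -- one {parameter_name: array} dict per Trainer
    sample_counts -- number of local training samples each Trainer used
    """
    total = sum(sample_counts)
    aggregated = {}
    for name in updates[0]:
        aggregated[name] = sum(
            (n / total) * update[name] for update, n in zip(updates, sample_counts)
        )
    return aggregated
```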
User Interfaces
Neurolite Network prioritizes user experience and ease of use. Therefore, the interactions of Task Publishers, Trainers, and Aggregators will be facilitated via a decentralized web application (DApp) that manages their engagement with the network.
Account Abstraction Module
To enhance the experience and security of participants in the Neurolite Network, an Account Abstraction Module that can be deployed in supported wallets will be provided. Its goals are to enable seamless interaction by leveraging session keys, to sponsor Trainers’ and Aggregators’ transactions via Paymasters, and more.
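Neither the session-key format nor the Paymaster policy is defined here; the sketch below is only an illustrative check that a session key may authorize a given task-related action within its validity window, in the spirit of ERC-4337-style account abstraction. All field and function names are assumptions.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionKey:
    """Illustrative session-key grant; field names are assumptions."""
    public_key: str
    allowed_methods: tuple    # e.g. ("submit_update", "register_trainer")
    valid_until: float        # unix timestamp at which the grant expires

def can_execute(key: SessionKey, method: str, now: Optional[float] = None) -> bool:
    """Return True if the session key may sign this call without a fresh wallet prompt."""
    now = time.time() if now is None else now
    return method in key.allowed_methods and now <= key.valid_until
```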
Decentralized Storage
Decentralized storage platforms such as IPFS are used to store model updates in a secure, encrypted manner.
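As an illustration of this storage flow, the sketch below encrypts a serialized model update before it is pinned; the symmetric Fernet scheme and the SHA-256 stand-in for the IPFS content identifier are assumptions, not the network's actual pipeline.

```python
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_update(serialized_update: bytes, key: bytes) -> bytes:
    """Encrypt a serialized model update before it leaves the device (illustrative)."""
    return Fernet(key).encrypt(serialized_update)

def content_identifier(ciphertext: bytes) -> str:
    """Stand-in for the CID an IPFS node would return after pinning the ciphertext."""
    return hashlib.sha256(ciphertext).hexdigest()

key = Fernet.generate_key()                        # shared out-of-band with the Aggregators
ciphertext = encrypt_update(b"...model weights...", key)
cid = content_identifier(ciphertext)               # this identifier is what goes on-chain
```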