Components

OpenLMIS v3 uses a microservices architecture in which each service provides its own API.

Each component below has its own Git repository, API docs and ERD. Many services below also have a corresponding UI component (e.g. Auth UI, Requisition UI). The Reference UI builds all of these UI components together into one web application.

Logging into the Live Documentation

The live documentation links below connect directly to our API Console docs on our CI server. To use the API, you’ll first need to get an access token from the Auth service, and then you’ll need to provide that token with each RESTful operation you call.

Obtaining an access token:

  1. Go to the Auth service’s POST /api/oauth/token
  2. Click Try it in the top right of the tab
  3. In the Authentication section, enter username user-client and password changeme
  4. In the Query Parameters section, enter username administrator and password password
  5. Click Authorize under password
  6. Enter the username administrator and password password
  7. Click Post
  8. In the Response box, copy the UUID value of the access_token. e.g. for "access_token": "a93bcab7-aaf5-43fe-9301-76c526698898", copy a93bcab7-aaf5-43fe-9301-76c526698898 to use later
  9. Use the access token you just copied with every request (a scripted version of this flow is sketched after these steps).
    • In the live documentation using Try It, type bearer followed by the access_token you copied earlier into the Authorization header. e.g. bearer a93bcab7-aaf5-43fe-9301-76c526698898
    • Alternatively, in any other HTTP request tool (e.g. Postman) you may append it in the query parameters using the access_token field. e.g. GET https://test.openlmis.org/api/facilities?access_token=a93bcab7-aaf5-43fe-9301-76c526698898
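
Outside the API Console, the same flow can be scripted with any HTTP client. Below is a minimal sketch using Python’s requests library; the host and the demo credentials come from the steps above, while the grant_type=password parameter is an assumption based on a standard OAuth2 password grant.

    import requests

    BASE_URL = "https://test.openlmis.org"

    # Steps 1-7: request a token. The client id/secret go in HTTP Basic auth;
    # the user credentials go in the query parameters, as in the steps above.
    token_response = requests.post(
        BASE_URL + "/api/oauth/token",
        auth=("user-client", "changeme"),
        params={
            "grant_type": "password",   # assumption: standard OAuth2 password grant
            "username": "administrator",
            "password": "password",
        },
    )
    token_response.raise_for_status()
    access_token = token_response.json()["access_token"]  # step 8: the UUID value

    # Step 9, option 1: send the token in the Authorization header.
    facilities = requests.get(
        BASE_URL + "/api/facilities",
        headers={"Authorization": "Bearer " + access_token},
    )

    # Step 9, option 2: append it as the access_token query parameter instead.
    facilities = requests.get(
        BASE_URL + "/api/facilities",
        params={"access_token": access_token},
    )
    print(facilities.status_code)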

Auth Service

The Auth Service provides RESTful API endpoints for authentication and authorization. It holds user security credentials, handles password resets, and manages API keys. It uses OAuth2. The Auth Service works with the Reference Data Service to handle role-based access controls. (See the Auth Service README for details.)

Fulfillment Service

The Fulfillment Service provides RESTful API endpoints for orders, shipments, and proofs of delivery. It supports fulfillment within OpenLMIS as well as fulfillment by external ERP warehouse systems.

CCE Service

The Cold Chain Equipment (CCE) Service provides RESTful API endpoints for managing a CCE catalog, inventory (tracking equipment at locations), and functional status. The catalog can be based on the WHO PQS (Performance, Quality and Safety) catalog.

HAPI FHIR Service

The HAPI FHIR Service provides RESTful API endpoints for FHIR locations. It supports keeping OpenLMIS facility data in sync with external facility registries through FHIR.
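
As an illustration of how a client might read location data over FHIR, the sketch below performs a standard FHIR search for Location resources (GET [base]/Location returns a Bundle, per the FHIR specification). The host and the /hapifhir base path are assumptions about a particular deployment, and authentication is omitted for brevity.

    import requests

    # Assumption: the HAPI FHIR service is mounted at /hapifhir on the same host.
    FHIR_BASE = "https://test.openlmis.org/hapifhir"

    # Standard FHIR search: GET [base]/Location returns a Bundle of Location resources.
    response = requests.get(
        FHIR_BASE + "/Location",
        headers={"Accept": "application/fhir+json"},
    )
    bundle = response.json()

    # Print the name of each Location in the first page of results.
    for entry in bundle.get("entry", []):
        print(entry["resource"].get("name"))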

Notification Service

The Notification Service provides RESTful API endpoints that allow other OpenLMIS services to send email notifications to users. The Notification Service does not provide a web UI.

Reference Data Service

The Reference Data Service provides RESTful API endpoints for master lists of reference data, including users, facilities, programs, products, schedules, and more. Most other OpenLMIS services depend on the Reference Data Service. Many of these master lists can be loaded into OpenLMIS in bulk using the Reference Data Seed Tool, or they can be added and edited individually using the Reference Data Service APIs.

Reference UI

The OpenLMIS Reference UI is a single-page application that is compiled from multiple UI repositories. The Reference UI is similar to the OpenLMIS-Ref-Distro in that it’s an example deployment for implementers to use.

Learn about the Reference UI:

  • OpenLMIS UI Overview describes the UI architecture and tooling
  • UI Styleguide shows examples and best practices for many re-usable components
  • Dev UI documents the build process and commands used by all UI components

Coding and Customizing the UI:

UI Repositories:

Report Service

The Report Service provides RESTful API endpoints for generating printed / banded reports. It owns report storage and generation (including PDF output), and it seeds the rights that users may be granted to access reports.

Requisition Service

The Requisition Service provides RESTful API endpoints for a robust requisition workflow used in pull-based supply chains for requesting more stock on a schedule through an administrative hierarchy. Requisitions are initiated, filled out, submitted, and approved based on configuration. Requisition Templates control what information is collected on the Requisition form for different programs and facilities.
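
As a simplified illustration of the lifecycle described above, the sketch below models a requisition moving through the statuses named in this paragraph. The status names and transition rules are a deliberately reduced illustration, not the service’s exact state machine.

    from enum import Enum

    class RequisitionStatus(Enum):
        INITIATED = "initiated"   # created for a facility, program, and period
        SUBMITTED = "submitted"   # filled out and submitted for approval
        APPROVED = "approved"     # approved through the administrative hierarchy

    # Simplified transition table; real deployments configure additional steps.
    ALLOWED_TRANSITIONS = {
        RequisitionStatus.INITIATED: {RequisitionStatus.SUBMITTED},
        RequisitionStatus.SUBMITTED: {RequisitionStatus.APPROVED},
        RequisitionStatus.APPROVED: set(),
    }

    def advance(current: RequisitionStatus, target: RequisitionStatus) -> RequisitionStatus:
        """Move a requisition to the next status if the transition is allowed."""
        if target not in ALLOWED_TRANSITIONS[current]:
            raise ValueError(f"Cannot move from {current.name} to {target.name}")
        return target

    status = RequisitionStatus.INITIATED
    status = advance(status, RequisitionStatus.SUBMITTED)
    status = advance(status, RequisitionStatus.APPROVED)
    print(status.name)  # APPROVED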

Stock Management Service

The Stock Management Service provides RESTful API endpoints for creating electronic stock cards and recording stock transactions over time.
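
As a simple illustration of what an electronic stock card captures, the sketch below replays a list of dated stock transactions and computes a running stock-on-hand balance. The field names and transaction types are illustrative, not the service’s API schema.

    # Illustrative stock card: each entry is a dated credit (receipt) or debit (issue).
    transactions = [
        {"date": "2019-01-01", "type": "credit", "quantity": 100},  # received stock
        {"date": "2019-01-10", "type": "debit", "quantity": 30},    # issued stock
        {"date": "2019-01-20", "type": "debit", "quantity": 25},
    ]

    # Replay the transactions in date order to get stock on hand over time.
    stock_on_hand = 0
    for entry in sorted(transactions, key=lambda t: t["date"]):
        if entry["type"] == "credit":
            stock_on_hand += entry["quantity"]
        else:
            stock_on_hand -= entry["quantity"]
        print(entry["date"], stock_on_hand)  # running balance: 100, 70, 45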

Diagnostics Service

The Diagnostics Service provides RESTful API endpoints for checking the system health.

Reporting and Analytics Platform

OpenLMIS includes a reporting and analytics platform that extracts data from each microservice, streams it to a data warehouse, and provides a scalable reporting and dashboard interface. This reporting platform is made up of multiple open-source components: Apache NiFi, Apache Kafka, Druid, and Apache Superset. This section provides an overview of each of these components.

NiFi

NiFi is used for pulling data from OpenLMIS’s APIs, merging data from those APIs into a single schema, and transforming the data into a format that’s easy to query in Druid. Currently, NiFi blends data from the stockCardSummaries API and the referenceData API. It splits stock cards into line items and merges reference data with those line items, so that there is a single schema in which stock card transactions (line items) carry detailed reference data such as facility name and commodity type name, instead of the reference data ids that natively live on the transaction in the stock management module. NiFi functions like an assembly line, where data (packaged as “FlowFiles”) moves from “processor” to “processor” through the flow.
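
The sketch below imitates that transformation in plain Python: it splits a stock card summary into line items and merges in reference data by id, producing one flat record per transaction. The field names and sample records are illustrative placeholders, not the actual API payloads or the NiFi flow definition.

    # Illustrative inputs: one stock card summary with line items, plus reference
    # data keyed by id. Field names are placeholders, not the real API schemas.
    stock_card_summaries = [
        {
            "facilityId": "f-1",
            "orderableId": "o-1",
            "lineItems": [
                {"occurredDate": "2019-01-01", "quantity": 100},
                {"occurredDate": "2019-01-10", "quantity": -30},
            ],
        }
    ]
    facilities = {"f-1": {"name": "Comfort Health Clinic"}}
    orderables = {"o-1": {"fullProductName": "BCG"}}

    # Split each stock card into line items and merge in the reference data,
    # producing one flat record per transaction (the single schema Druid will query).
    flat_records = []
    for card in stock_card_summaries:
        for item in card["lineItems"]:
            flat_records.append({
                "facilityName": facilities[card["facilityId"]]["name"],
                "productName": orderables[card["orderableId"]]["fullProductName"],
                "occurredDate": item["occurredDate"],
                "quantity": item["quantity"],
            })

    for record in flat_records:
        print(record)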

Kafka

Kafka is used for stream processing and for passing the data from NiFi to Druid. It works on a publish-subscribe model, similar to how message queues in enterprise messaging systems work. Kafka runs as a cluster on one or more servers. A Kafka cluster stores streams of “records” in categories called “topics.” A record consists of three parts: a key, a value, and a timestamp. A Kafka topic receives each transformed transaction from NiFi, and the Druid “supervisor” subscribes to that topic. The supervisor is always listening for updates from Kafka and indexes the data immediately.
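
To show what the key/value/timestamp record model looks like in practice, here is a minimal sketch that publishes one transformed transaction to a Kafka topic using the kafka-python library. The broker address, topic name, and record fields are illustrative placeholders rather than the reporting stack’s actual configuration.

    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        key_serializer=lambda k: k.encode("utf-8"),
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Each record has a key, a value, and a timestamp (assigned when it is sent).
    record = {
        "facilityName": "Comfort Health Clinic",   # hypothetical enriched fields
        "commodityTypeName": "BCG",
        "quantity": 100,
        "occurredDate": "2019-01-01",
    }
    producer.send("stock-transactions", key=record["facilityName"], value=record)
    producer.flush()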

Druid

Druid is a distributed, column-oriented OLAP database that the reporting stack uses for data storage and querying. Druid is purpose-built for querying streaming data sets at scale. Each set of data is called a “data source.” JSON is the default language used for querying in Druid and is what the DISC indicators use. Druid also includes support for SQL using Apache Calcite, although this is not yet something we’ve explored. Documentation on querying Druid with JSON is available in the official Druid documentation.
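
For illustration, here is a minimal sketch of a native JSON query sent to a Druid broker with Python’s requests library. The broker URL, data source name, and field names are assumptions for the example; the query structure (queryType, dataSource, intervals, aggregations) follows Druid’s native query format.

    import requests

    # A native timeseries query: monthly sum of quantities from an assumed data source.
    query = {
        "queryType": "timeseries",
        "dataSource": "stock_transactions",          # hypothetical data source name
        "granularity": "month",
        "intervals": ["2019-01-01/2019-12-31"],
        "aggregations": [
            {"type": "longSum", "name": "totalQuantity", "fieldName": "quantity"},
        ],
    }

    # Druid brokers accept native JSON queries at POST /druid/v2/ (default port 8082).
    response = requests.post("http://localhost:8082/druid/v2/", json=query)
    print(response.json())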

Superset

Superset is the visualization layer of the reporting stack and is used to create self-service dashboards on the data in Druid. It is closely integrated with Druid and will detect the schema for each data source and the data therein. “Dimensions” are akin to columns within a relational database, and “metrics” are calculations performed on those dimensions, e.g. count distinct, sum, min, and max. Typically “metrics” are defined on numeric dimensions, with the exception of count distinct. Superset is also the UI in which we write JSON queries for Druid to calculate metrics that are more sophisticated than the basic types outlined above.

Slices are individual visualizations and can be listed by clicking on the Charts tab along the top. Each slice has a visualization type, a data source, and one or more metrics and dimensions that you want to display. Superset supports the development of custom visualization types if it’s not included in the default list provided by Apache.

A dashboard is an assembly of slices onto a single page. Filters can be applied at the dashboard level, and they filter all slices sharing the filter’s data source down to the specified dimension values. Filters can also be used to manipulate date ranges. With proper security (more information below), users can save custom private or public versions of dashboards, and drill into a particular slice to modify it and construct an ad hoc visualization.

Security is handled via User Roles and Users. A User is a distinct login with a password, and is tied to an email address. There can only be one User per email address. A User Role is the list of actions that a User can do in Superset. Superset contains three User Roles by default, but they can be customized by duplicating the defaults and adding or removing permissions.

  • Gamma - a view-only user who can save private views of dashboards and slices
  • Alpha - a power user who is able to view all data sources, and create public dashboards and slices
  • Admin - administrator with all access