More building, less plumbing.
Orbital handles the stitching between your microservices

Microservices are great, but they bring new integration complexity: stitching together multiple APIs, finding the right ones, and maintaining countless generated clients. As APIs change, integrations break.
Your teams are already writing API specs (OpenAPI, Protobuf, Avro, etc.). Orbital uses those specs, along with semantic metadata, to build integrations on the fly.
As things change, Orbital automatically adapts, so you can focus on features rather than repairing broken integrations.
Orbital eliminates integration code, rather than shifting it to another tool.
There are no resolvers to maintain, no API clients to generate, and no field-mapping YAML files.
Drive everything from your API specs, and deploy from Git
Define a set of terms, and embed them in your API specs.
Use those same terms to query for data.
Orbital handles connecting to the right systems.
// Define semantic types for the attributes
// that are shared between systems.
type MovieId inherits Int
type MovieTitle inherits String
type ReviewScore inherits Int
type AwardTitle inherits String
find { Movies(ReleaseYear > 2018)[] }
as {
   // Consumers define the schema they want.
   // Orbital works out where to fetch data from,
   // and builds integration on demand.
   title : MovieTitle    // ...read from a db
   review : ReviewScore  // ...call a REST API to find this
   awards : AwardTitle[] // ...and a gRPC service to find this
}
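Behind those comments, Orbital works out where to call by reading the service declarations in your published specs. As a rough sketch only, with a hypothetical service name, URL and operation (not taken from a real spec), a Taxi service description could look like this:

// Hypothetical service declaration. The return type (ReviewScore)
// tells Orbital this API can supply a review score for a given MovieId.
service MovieReviewsService {
   @HttpOperation(method = "GET", url = "https://reviews.example.com/movies/{movieId}/score")
   operation getReviewScore(@PathVariable("movieId") movieId : MovieId) : ReviewScore
}

With a declaration like this in your schema, the query above can populate review : ReviewScore without any generated client code.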
Stitch your databases, message queues, RESTful APIs and gRPC services together, without requiring a GraphQL adaptor.
Orbital is powered by Taxi, our open source metadata language, so you can seamlessly link multiple data sources without needing a specialist adaptor.
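The linking works because the same semantic types appear in models that describe different sources. A minimal sketch, with illustrative model names and source comments (not taken from the Orbital docs):

// MovieId appears in both models, so Orbital can join a movie from one
// source to its review from another, without hand-written mappings or resolvers.
model Movie {           // e.g. described by a database schema
   id : MovieId
   title : MovieTitle
}

model MovieReview {     // e.g. returned by a REST API
   movieId : MovieId
   score : ReviewScore
}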
The tools you want from your middleware, redesigned to delight engineers and fit snugly in their existing tooling and workflows.
Enable caching at the edge to instantly increase performance, cut latency, and reduce the load on your services.
See full traces of which systems were called as part of your query and what data was provided.
Scale without limits, and keep your resource costs low, with full isolation for each query.
Got another gnarly question? We'd love to hear it. Come and chat on Slack.