
Streamverse
Built a 0 to 1 product in a two-week design sprint that saved >₹10L in the first month of operation!
MY ROLE
UI, UX, PRODUCT (PARTLY)
DURATION
4 months
TEAM
2
TL;DR
Streamverse is a Streaming Platform as a Service (SPaaS) that helps teams create real-time data processing pipelines. It lets users act on data as soon as it’s generated.
We built the first version in a 2-week sprint to test the idea. It turned out to be a huge win, saving over ₹10L in just 4 weeks. The success led to it becoming a full internal platform that product managers could use directly, without needing data engineers.
The Initial Challenge
Creating streaming jobs took too long. Every team had to go through the data engineering team, which slowed things down and created a dependency.
The goal was to cut this lead time and give more control to the backend teams, and later to product managers.
CURRENT USER JOURNEY
Earlier, creating a streaming job had 7 steps and always had to go through the data team.
This meant bottlenecks, delays and often dropped plans.
UNDERSTANDING USERS
Data Engineers
Strengths: Job creation, monitoring & data architecture expertise
Weakness: Bottleneck, Bandwidth crunch
Backend Engineers
Strengths: Context of job creation; capable of using a low-code streaming platform
Weakness: Lack technical know-how of streaming frameworks
BREAK DOWN THE WALLS
This wasn’t a simple design problem. We had to understand how data pipelines actually work.
We started by interviewing the data engineering director to unpack the architecture, the tools, the process, and the jargon. Only after that could we start thinking about simplifying it through design.
We then interviewed a backend engineer to gauge how much of that information was easily comprehensible to them, which set the floor for how much we needed to abstract away.
DECIDE🤔
Hypothesis: Reduce the lead time of creating a streaming job by ~80% and ultimately remove the dependency on the DE team completely, while also bringing discoverability, bookkeeping and observability to all streaming jobs.
Our new flow reduced it to 2 steps. Backend engineers could now build pipelines on their own.
This freed up the data team, made timelines predictable, and gave more ownership to individual pods.
[Future Scope] Simplifying the system for product managers to directly create such pipelines.
DESIGN?❌ SKETCH FIRST✅
Before jumping to Figma, we used sketches and paper prototypes.
We showed them to the PM and a few users to check if we were thinking in the right direction. It helped us avoid early design waste and got us faster feedback.


We made cutouts for rapid paper prototyping, getting instant feedback without being limited by what static paper could convey
FINAL DESIGN
The first version supported basic pipeline creation with configurable nodes and clean layouts.
We used a shared design system to keep things consistent and easy to expand.
1. A listing view that displays all jobs with key attributes
Job Name → Easy identification, follows a naming convention.
Status → Real-time operational health (Running, Failed, Paused).
Created By → Tracks ownership and accountability.
Running Since → Shows job uptime, helping spot anomalies.
Actions (Start/Stop) → Direct controls to manage jobs without drilling down.
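To make the listing concrete, here's a minimal sketch of the job record that could sit behind this view. This is illustrative TypeScript; the field names and status values are my assumptions, not Streamverse's actual schema.

```typescript
// Illustrative sketch only: field names are assumptions, not the real schema.
type JobStatus = "Running" | "Failed" | "Paused";

interface StreamingJob {
  name: string;          // follows the team's naming convention
  status: JobStatus;     // real-time operational health
  createdBy: string;     // ownership and accountability
  runningSince?: string; // ISO timestamp; shows uptime, absent when stopped
}

// The Start/Stop actions in the list map to simple state transitions.
function toggle(job: StreamingJob): StreamingJob {
  return job.status === "Running"
    ? { ...job, status: "Paused", runningSince: undefined }
    : { ...job, status: "Running", runningSince: new Date().toISOString() };
}
```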
2. Pipeline Canvas as a Visual Metaphor
Why: Real-time data pipelines are inherently complex (sources, operators, destinations). A visual canvas abstracts this complexity into a flowchart-like representation that's intuitive, reducing the learning curve and letting teams debug and communicate about pipelines faster.
Decision: Left-to-right directional flow mirrors natural reading order, making it easier to trace data lineage.
3. Node Classification (Source, Operator, Destination)
Why: Breaking pipelines into three categories simplifies mental models—users don’t have to think in terms of raw APIs or infrastructure, but rather "where does my data come from → how is it processed → where does it go."
Decision:
Source: Entry point for real-time data streams (e.g., Kafka).
Operator: Transformation/processing (e.g., JDBC).
Destination: Data sinks like Redshift, Kafka, or other services.
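As a sketch of how this three-way classification might look as a data model (the type and field names below are illustrative assumptions, not Streamverse's actual API):

```typescript
// Illustrative sketch only: types and config fields are assumptions,
// not the actual Streamverse API.
type PipelineNode =
  | { kind: "source"; config: { kafkaTopic: string; schemaType: "avro" | "json" } }
  | { kind: "operator"; config: { query: string } } // e.g. a JDBC/SQL transform
  | { kind: "destination"; config: { sink: "redshift" | "kafka"; target: string } };

// A pipeline is an ordered source → operators → destination chain,
// which is exactly what the left-to-right canvas renders.
interface Pipeline {
  name: string;
  nodes: PipelineNode[];
}

const example: Pipeline = {
  name: "clickstream-to-redshift",
  nodes: [
    { kind: "source", config: { kafkaTopic: "clicks", schemaType: "json" } },
    { kind: "operator", config: { query: "SELECT user_id, ts FROM stream" } },
    { kind: "destination", config: { sink: "redshift", target: "analytics.clicks" } },
  ],
};
```

The discriminated union mirrors the mental model from the canvas: users only ever reason about three node kinds, never about the underlying infrastructure.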
4. Contextual Details Pane (Right Sidebar)
Why: Pipelines often require node-specific metadata (Kafka topic, schema type, retry strategies).
Decision: A collapsible details panel opens when a node is selected, allowing contextual editing without losing track of the pipeline flow.
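A rough sketch of the selection model that could sit behind such a pane, reusing the hypothetical Pipeline and PipelineNode types from above (the shapes are assumptions for illustration):

```typescript
// Sketch of the selection state behind the details pane (assumed shapes).
interface CanvasState {
  pipeline: Pipeline;               // from the previous sketch
  selectedNodeIndex: number | null; // null → details pane is collapsed
}

// Edits from the pane are written back into the selected node in place,
// so the user configures a node without ever leaving the pipeline view.
function applyEdit(state: CanvasState, edited: PipelineNode): CanvasState {
  if (state.selectedNodeIndex === null) return state;
  const nodes = [...state.pipeline.nodes];
  nodes[state.selectedNodeIndex] = edited;
  return { ...state, pipeline: { ...state.pipeline, nodes } };
}
```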
5. Error Feedback & Debugging
Why: Users need immediate awareness if their configuration/code is invalid. This provides a faster feedback loop, aligns with developer workflows (similar to IDEs).
Decision: Inline error console at the bottom (red banner for visibility).
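A minimal sketch of the kind of validation pass that could feed such a console, again using the hypothetical types above; the rules shown are illustrative, not the real rule set:

```typescript
// Sketch of a validation pass that could feed the inline error console.
// The rules are illustrative, not Streamverse's real rule set.
interface PipelineError {
  nodeIndex: number;
  message: string;
}

function validate(pipeline: Pipeline): PipelineError[] {
  const errors: PipelineError[] = [];
  if (pipeline.nodes[0]?.kind !== "source") {
    errors.push({ nodeIndex: 0, message: "Pipeline must start with a source" });
  }
  pipeline.nodes.forEach((node, i) => {
    if (node.kind === "source" && node.config.kafkaTopic.trim() === "") {
      errors.push({ nodeIndex: i, message: "Source is missing a Kafka topic" });
    }
    if (node.kind === "operator" && node.config.query.trim() === "") {
      errors.push({ nodeIndex: i, message: "Operator has no transformation query" });
    }
  });
  return errors; // rendered in the red banner, like an IDE's problems panel
}
```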
Test (and Present)
I then presented our designs to the frontend and backend leads for buy-in, and we shipped after iterating on the minor changes suggested in the meeting.
LEARNINGS 🙌🏼
*1
Scope out properly!
The most important thing in these sprints is the items you decide to work on. Pick too many & you'll find yourself over-stretched every day (which happened with us); pick too few & the sprint won't be as impactful.
*2
Tech-Product-Design sync
Development for this sprint was delayed by a couple of weeks. Maintaining proper sync with the tech teams and aligning with their bandwidth before conducting a design sprint would have ensured we designed efficiently and stayed effective in our process.
Metrics
Streamverse was one of the few projects that made a huge impact on cost & TAT (turnaround time) reduction in just the first month of its operation.