



More data, more variation in data, higher and completely different data resolutions, scheduled or event-triggered data, and the need to process large amounts of data in as near real time as possible: these are the real-life data challenges that utilities around the globe are facing as they optimize their grids and fulfill regulatory requirements with all kinds of new data sources and decision support systems.
Greenbird's Utilihive platform is designed to handle big data. And lots of it!
Utilihive enables real-time data flows within Advanced Metering Infrastructure (AMI), Smart Grid, Distributed Energy Resource Management (DERM) and other areas such as our Utilihive Datalake.
Performance and scalability are key.
Here are some examples, both from test bench scenarios and real customer cases.
Example 1: Rapid Processing of Smart Metering Data
Ideally, Advanced Metering Infrastructure moves data quickly and securely from multiple Head-end Systems (HES) to a Meter Data Management system. Sometimes, data critical for grid operations, such as outage events, needs to go into an ADMS or other operational solutions. Transporting and transforming the data into the required formats is a core task for our Utilihive platform.
Here's how this works in practice:
- One of our customers is currently in the middle of a multi-million smart meter roll-out.
- They asked us to perform a scalability test to demonstrate Utilihive's data handling with messages from 9 million meters.
- For this customer, Utilihive runs in a private data center and collects 15-minute values from six registers for every smart meter.
- We used the same metrics in the scalability test, simulating a two-hour data batch with 432 million meter values.
Utilihive processed, transformed, and handed off all data in under 15 minutes.
For full transparency:
- The test for 9 million meters was carried out in an on-premise environment with 10 nodes (8 CPU cores and 96 GB RAM each).
- The hardware proved to be oversized, as CPU utilization never went above 40%.
- Memory utilization didn't even go significantly above 10%.
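The figures in this test are easy to sanity-check. A back-of-the-envelope calculation (using only the numbers stated above) confirms the batch size and shows the implied throughput:

```python
# Sanity check of the scalability test figures above.
meters = 9_000_000
registers = 6
intervals = 8  # two hours of 15-minute values = 8 per register

values = meters * registers * intervals
print(f"{values:,} meter values")  # 432,000,000 meter values

# Processed in under 15 minutes:
throughput = values / (15 * 60)
print(f"~{throughput:,.0f} values/second")  # ~480,000 values/second
```

In other words, clearing the two-hour batch in under 15 minutes means sustaining roughly half a million meter values per second across the cluster.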


Example 2: Processing Data From Multiple Power Grid Sources
- Another customer uses Utilihive Datalake to store and analyze data from the grid.
- In total, we integrated and provisioned data from different sources, amounting to roughly 70 billion readings, from substations all the way down to household smart meters.
- Our customer had previously developed a reporting system using queries on this data. Before they implemented Utilihive, this report would take them around three days to compute and finish.
When they implemented the same report using Utilihive Datalake, it was generated in just under 10 minutes.
If you find that hard to believe, so did our client. They told us they had re-run the report several times, because they thought the results were impossible.
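The size of that improvement is easy to quantify from the two reported runtimes:

```python
# Speedup implied by the reporting times above.
legacy_minutes = 3 * 24 * 60  # around three days
datalake_minutes = 10         # just under 10 minutes

speedup = legacy_minutes / datalake_minutes
print(f"~{speedup:.0f}x faster")  # ~432x faster
```

A report that previously blocked a team for half a week can now be re-run interactively, which is exactly why the client re-ran it several times to convince themselves.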
How Can Utilihive Achieve Results Like These?
Utilihive is different from other integration platforms or ESBs. How? It's built as a network of reactive microservices (a service mesh), following the actor model for highly concurrent applications and the principles of event-driven architecture. This allows for high performance and dynamic scalability, and ensures high availability by design. In addition, we can usually compress data to below 5% of the original data size, further boosting data processing performance.
In the future, utilities will have to handle growing quantities of data from sensors, meters, local producers and much more. We say: Bring it on!
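Greenbird does not publish the details of its compression scheme, but the claim is plausible because metering time series are extremely repetitive. As an illustration only (the record layout and values below are invented, and standard zlib stands in for whatever codec Utilihive actually uses), here is how strongly synthetic 15-minute readings compress:

```python
import zlib

# Synthetic 15-minute readings: near-identical rows, as is typical
# for metering time series (fixed layout, slowly changing values).
rows = []
for meter in range(10_000):
    for i in range(8):  # two hours of 15-minute values
        ts = f"2024-01-01T{i * 15 // 60:02d}:{i * 15 % 60:02d}"
        rows.append(f"meter-{meter:06d};{ts};{1.5 + i * 0.01:.3f}")
payload = "\n".join(rows).encode()

compressed = zlib.compress(payload, level=9)
ratio = len(compressed) / len(payload)
print(f"compressed to {ratio:.1%} of original size")
```

On data this regular, general-purpose compression alone reaches single-digit percentages; purpose-built time-series encodings (delta-of-delta timestamps, value XOR encoding) typically do even better.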