Deep Dive into MVI (On the Edge)

Michael Rawlinson

May 4, 2026

Since posting my first blog about Maximo Visual Inspection (MVI), I’ve been on a bit of a journey, digging deeper into its features, exploring new use cases, and discovering just how versatile this technology can be.  

In this post, I want to share a more in-depth look at what MVI has to offer, the technical details and the different ways you can use it to solve real-world problems. Whether you’re curious about how MVI works under the hood, interested in deploying it on the edge, or just looking for practical examples, I’ll cover the insights and lessons I’ve picked up so far.

Under the Hood

MVI combines image and video analytics with deep learning models to perform classification, object detection, and anomaly detection. What sets MVI apart is its accessibility: users can train models with minimal technical expertise, making advanced AI available to a broader range of organizations.

  • Model Training: Users upload images or videos, label data, and train models directly within the MVI interface.
  • Deployment: Once trained, models can be deployed to inspect assets at scale, whether in the cloud or at the edge.
  • Integration: MVI integrates with Monitor, allowing the user to set rules on how certain results are handled. It also offers a robust API for building integrations with Manage and other third-party services.
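To make the API point concrete, here is a rough sketch of scoring an image against a deployed model over HTTP. The host, model ID, and API key are placeholders, and the endpoint path (`POST /api/dlapis/<model_id>`), the `X-Auth-Token` header, and the `classified` response field follow the pattern in MVI's inference API documentation, so verify the exact contract against your own instance's API reference before relying on it:

```python
import json
import urllib.request
import uuid

def score_image(host, model_id, api_key, image_path):
    """POST an image to a deployed MVI model for inference.

    Endpoint shape and auth header are based on MVI's documented
    inference API; confirm against your instance before use.
    """
    boundary = uuid.uuid4().hex
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    # Build a minimal multipart/form-data body by hand (stdlib only).
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="files"; filename="image.jpg"\r\n'
        "Content-Type: image/jpeg\r\n\r\n"
    ).encode() + image_bytes + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        f"https://{host}/api/dlapis/{model_id}",
        data=body,
        headers={
            "X-Auth-Token": api_key,
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def detections_above(response, threshold):
    """Filter the model's detections down to the confident ones."""
    return [d for d in response.get("classified", [])
            if d["confidence"] >= threshold]
```

From there, a rule in Monitor or an integration into Manage only needs to act on `detections_above(result, 0.8)` rather than on every raw prediction.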

Training

To train a model using MVI, you’ll need at least one GPU (yes, that’s the expensive part). From there, the process is straightforward:

  • Create a Dataset: Gather a set of images (preferably 20+).
  • Label the Data: Mark where faults or anomalies appear in each image.
  • Augment the Data: Increase dataset diversity for better accuracy.
  • Train the Model: Choose an algorithm based on your use case.
  • Retrain the Model: The model will often need to be retrained to refine its accuracy.
  • Deploy: Deploy the trained model to the cloud or edge server.


Once trained, models can be exported, allowing them to be used on multiple systems. And if you no longer need to refine a model, you can stop here: the expensive, GPU-intensive part is done.


The process doesn’t stop after deployment. Here’s what you can do next:

  • Labeling: Use the deployed model to automatically label new images in datasets, helping the model continuously improve and reducing some of the time needed to retrain/refine models.
  • Model Refinement: Create generalized models and then fine-tune them for specific needs. For example, a model trained on fire extinguishers can be retrained with additional images of a different type of fire extinguisher, refining the existing model without starting from scratch.

General Use

When I attended a recent Maximo User Group, I noticed there was a bit of a misunderstanding about how MVI is used in industry. Many people seemed to think of MVI in a very specific way: imagine a camera fixed above a conveyor belt on a production line, analyzing parts in a stable, unchanging environment. And yes, that’s a classic use case, but it’s just scratching the surface.

What I’ve discovered is that MVI is far more dynamic and versatile than many realize. It’s not limited to static setups. For example, a technician can use MVI for in-person inspections, simply by snapping a photo of an asset while walking around a factory. Or, you can use live feeds from security cameras along train tracks to automatically check for obstructions. At the other end of the spectrum, you can even have drones flying over a solar farm, with MVI monitoring the footage live or analyzing it later as a batch inspection.

The general gist? MVI adapts to your needs. It’s not about fitting your process to the tool; it’s about letting the tool fit your use case, whether that’s fixed, mobile, live, or after the fact.

MVI on the Edge

Edge deployment means running MVI’s AI models on devices or servers closer to the assets. A simple example is a camera linked directly to a local edge server. The camera streams data to this server with minimal latency, where the output is continuously analyzed, ready for the model to trigger actions.

Key Advantages

  • Real-Time Insights: Edge devices or nodes process images and videos instantly, enabling immediate detection of faults or anomalies.
  • Reduced Latency: Decisions can be made locally, without waiting on round trips of API calls to a cloud server.
  • Bandwidth Efficiency: Only relevant results (not raw images) are sent to central systems, saving network resources.
  • Enhanced Security: Sensitive asset data can be processed and stored locally or regionally, reducing exposure risks.
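To make the bandwidth-efficiency point concrete, here is a minimal sketch of edge-side summarization (the field names are illustrative, not MVI's actual schema): the raw frame stays on the edge device, and only a compact alert goes upstream, and only when something is actually detected:

```python
import json

def summarize_for_upstream(asset_id, detections, min_confidence=0.8):
    """Condense local inference output into a compact alert, or None.

    `detections` is a list of {"label", "confidence"} dicts from the
    local model; the raw image never leaves the edge device.
    """
    hits = [d for d in detections if d["confidence"] >= min_confidence]
    if not hits:
        return None  # nothing actionable -- send nothing at all
    return json.dumps({
        "asset": asset_id,
        "faults": sorted({d["label"] for d in hits}),
        "detections": len(hits),
    })

# A clean frame produces no network traffic at all:
print(summarize_for_upstream("panel-17", [{"label": "crack", "confidence": 0.3}]))  # → None
```

A few dozen bytes of JSON per fault, instead of streaming every frame to the cloud, is where the bandwidth saving comes from.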

Using the MVI Edge Server

Using an Edge server is surprisingly straightforward once you get started. The first step is connecting it to your MVI training server using an API key. That connection is what unlocks everything. From there, you can pull down the models you want to run locally on the Edge.


Once the models are in place, you set up your input sources. You can choose from image folders, video folders, or live camera feeds, depending on how you want to inspect your data. Whether it’s a constant live stream or a batch of images and videos dropped in as needed, the Edge server handles it cleanly and efficiently.

Next, you create a station. Think of this as a logical grouping for your inspections, keeping related results together in one place. Within a station, you then define individual inspections. Each inspection lets you decide which model to use, how it should behave, and what triggers it, whether that’s on a schedule or driven by an MQTT message.
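For the MQTT-driven case, the trigger is just a message published to a topic the Edge server listens on. The topic scheme and payload below are illustrative assumptions, not MVI Edge's actual contract (check your version's documentation for the real topic and message format), but they show the shape of the idea:

```python
import json
from datetime import datetime, timezone

def build_inspection_trigger(station, inspection):
    """Build a (topic, payload) pair to fire an MQTT-triggered inspection.

    The topic scheme here is an assumption for illustration; MVI Edge
    documents the actual MQTT contract for triggering inspections.
    """
    topic = f"mviedge/{station}/{inspection}/trigger"  # assumed topic scheme
    payload = json.dumps({
        "inspection": inspection,
        "requestedAt": datetime.now(timezone.utc).isoformat(),
    })
    return topic, payload

topic, payload = build_inspection_trigger("boiler-room", "gauge-check")
# Publishing is then one call with any MQTT client, e.g. paho-mqtt:
#   client.publish(topic, payload)
print(topic)  # → mviedge/boiler-room/gauge-check/trigger
```

This is what lets an external system, such as a PLC or a sensor rule, decide exactly when an inspection runs, rather than relying on a fixed schedule.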


Finally, everything comes together in the dashboard. From here, you can see all your inspections at a glance and instantly understand their current state. You can track what’s been processed, review results as they come in, and quickly spot anything that needs attention.

Example Use Case

A solar farm deploys drones equipped with edge AI to inspect panels. The drones analyze footage in real time, flagging damaged panels and sending only actionable alerts to the central Maximo system. Field technicians receive targeted work orders, minimizing manual inspection time and maximizing asset uptime.

Technical Considerations for Edge Deployment

One of the great things about edge deployment is flexibility: the servers or devices running the models can vary depending on your use case and the model’s requirements. At a recent UK&I MUG I attended with Naviam, we saw a demo in which IBM had an edge server running on a piece of hardware that was nearly 15 years old, with no GPU and only 16 GB of RAM.

The old rule still applies: you get what you pay for. A newer server will give you better speed and reliability, but even modest hardware can handle certain workloads without breaking the bank.

Real-World Impact

  • Manufacturing: Detecting wear and tear on machinery before breakdowns occur
  • Energy & Utilities: Inspecting pipelines, power lines, and solar panels in remote or hazardous locations
  • Transportation: Monitoring infrastructure (e.g., railways, bridges) for corrosion or damage, with instant alerts for maintenance crews

MVI’s edge capabilities represent a leap forward in asset management, delivering real-time, actionable insights where they matter most and allowing teams to train and deploy models with little to no prior experience with vision models. In the next blog, I’ll go deeper into Naviam’s solution, where we bridge the gap by integrating MVI inspections (both on the edge and in the cloud) directly into MAS Manage.

Unlock the Ultimate Guide to IBM Maximo Application Suite (MAS)

Discover everything you need to know to modernize your asset management strategy.

Inside, you’ll learn:

  • What’s new in IBM Maximo Application Suite 9.0
  • Key differences between Maximo 7.6 and MAS
  • How AppPoints and OpenShift change the game
  • Industry use cases across energy, manufacturing, and transportation
  • Step-by-step guidance for upgrading and migration readiness
