AI in Sewers

The importance of the pipe network beneath our feet

Our cities and suburbs are supported by a vast underground network of water, wastewater and stormwater infrastructure. This network of gravity pipes, pumps and filtration systems plays a very important role in our quality of life, eliminating disease, safeguarding the environment, and protecting communities.

However, parts of this ageing infrastructure are nearing the end of their useful life and now, more than ever, require closer attention. Without that attention, the situation is not sustainable.

Much of our water and wastewater infrastructure was installed during the 19th century, and municipalities are now facing the challenge of broad-scale infrastructure replacements or repairs costing hundreds of millions of dollars.

Adding to all of this is a changing climate, meaning systems that were designed 30 or so years ago may no longer be sufficient for the ever-changing environment around them.

To extend the life of infrastructure, reliance on smart city technology is critical. By creating visibility into buried assets to understand the condition of underground infrastructure, utilities can compare current performance with expectations and predict when and where problems may arise. This also paves the way for prioritisation of maintenance work, decreasing asset downtime and reducing service interruptions.

Today’s technology

With sensors and actuators becoming more cost effective, an array of technologies is becoming available to the pipe industry. For pressure pipes, or pipes transporting materials under high pressure, static sensors are being used to help monitor the health of the asset. In sewer and stormwater applications, video inspection is still widely used, with the assessment carried out visually by an expert.

With operational technology (OT) and information technology (IT) coming together, data that was once only available in isolated networks is now available via the web. This means CCTV operators no longer need to download inspection videos to a hard drive in order to assess the condition of the pipe back in the office; instead, they can upload the video file to the cloud.

AI at your service

With more data available and accessible, a path has been paved for advanced technology such as artificial intelligence (AI). These smart algorithms feed on data; in fact, the more data that is available, the quicker and more accurate an AI system can become.

Like other technologies, AI is a tool for better understanding a problem so that data-driven decisions can be made. One of the areas where AI is helping the pipe industry is video processing. The traditional means of CCTV condition assessment presents several challenges, including the time taken to review the videos and identify defects, operator subjectivity, and field conditions that make visual inspections difficult.

The AI models are pre-trained to detect certain anomalies, in this case pipe features and defects. The inspection video is then ingested and run through the trained model, and the output identifies the type and significance of each anomaly.
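As a purely illustrative sketch of what running frames against a trained model can look like, the snippet below samples frames from a video and passes each one through a generic pre-trained image classifier. The model, labels, file name and sampling rate are placeholders, not a description of any specific vendor's pipeline.

```python
# Illustrative only: a generic pre-trained classifier applied frame by frame
# to an inspection video. A real defect model would be trained on pipe imagery.
import cv2
import torch
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture("inspection.mp4")  # hypothetical file name
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly one frame per second at 30 fps
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        batch = preprocess(rgb).unsqueeze(0)
        with torch.no_grad():
            scores = model(batch).softmax(dim=1)
        conf, label = scores.max(dim=1)
        print(f"frame {frame_idx}: class {label.item()} ({conf.item():.2f})")
    frame_idx += 1
cap.release()
```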

VAPAR’s AI

Integrating the above-mentioned technologies, the VAPAR.Solutions platform leverages cloud computing and its AI engine to automatically assess inspection videos that users upload. The platform is accessed via any web browser, where videos can be uploaded, analysed and, if required, manually audited by an expert, with a report generated and stored – eliminating the need for hard drives to back up the video data and corresponding reports.

With this approach, both asset owners and CCTV contractors are reducing the time taken for assessments, standardising the process to remove any subjectivity and utilising AI to deep dive into the data to get better outcomes.

In 2020, VAPAR worked with asset owners in Victoria, Australia, where the results showed that the solution outperformed the same inspection carried out manually. The AI algorithm missed fewer defects and was more accurate in grading the pipes. To date, VAPAR has processed over 3 million images, which means the AI has only become quicker and more accurate.

Industry impact

With the pipe network needing special attention, technology is adding another lens through which to take a closer look. It is empowering engineers, operators and decision-makers to make data-driven decisions more cost effectively and efficiently.

Read more about our case studies here: CASE STUDIES


Top 5 technologies accelerating CCTV pipe inspection turnaround

Inspection camera being lowered into sewer manhole for 3D digital examination

As the old adage goes, time is money. A significant amount of the cost for CCTV pipe inspections can be attributed to the equipment, site set-up, back office processing of deliverables and associated labour. The good news is there are new technologies available to CCTV contractors and asset owners alike that can be used to drive down the overall cost of the CCTV inspection process.

5G for cloud streaming and streamlined access to online tools

The pipe inspection process is a field-based task and has to happen wherever the pipe is located. Prior to the rollout of 5G, the transmission of large amounts of data (such as video data and network mapping data) to and from the inspection location was often time-prohibitive. In Australia, with Telstra's rollout of 5G, many major city locations and some rural locations now have access to fast wireless data streaming services. We can expect to see many IoT (Internet of Things) and cloud streaming services deployed in wastewater networks as 5G coverage and adoption increases.

AI for advanced analytics

Artificial Intelligence (AI) comes in many shapes and sizes and can be utilised in multiple areas to aid the pipe inspection process. Not only are there applications like VAPAR’s that can automatically detect defects in pipes based on the inspection footage, but there are also statistical models that can predict pipe degradation, making the scoping of the next CCTV inspection package more targeted. AI has the potential to streamline both on-site activities as well as back-office activities, by taking out the manual parts of the inspection workflow.
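As a hedged illustration of the second idea, the sketch below fits a simple statistical model that ranks pipes by likelihood of poor condition using basic attributes. The features, figures and model choice are assumptions for demonstration only, not a recommended or production model.

```python
# Toy degradation model: rank pipes by predicted risk of poor condition so the
# next CCTV inspection package can target the highest-risk assets first.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hand-made illustrative records: pipe attributes plus a label from past inspections
pipes = pd.DataFrame({
    "age_years":      [12, 45, 60, 8, 75, 30, 52, 18],
    "diameter_mm":    [225, 300, 150, 375, 150, 225, 300, 450],
    "is_clay":        [0, 1, 1, 0, 1, 0, 1, 0],   # 1 = vitrified clay, 0 = other
    "poor_condition": [0, 1, 1, 0, 1, 0, 1, 0],
})

X = pipes[["age_years", "diameter_mm", "is_clay"]]
y = pipes["poor_condition"]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Higher "risk" = higher predicted probability of poor condition
pipes["risk"] = model.predict_proba(X)[:, 1]
print(pipes.sort_values("risk", ascending=False))
```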

APIs and integrations for data centralisation

Application Programming Interfaces, or APIs, are used to move data between different software tools (in particular, online software tools) without needing a person to manually export data from one system, reformat it and manually enter it into another.

When it comes to the pipe inspection and asset renewal process, there are many software tools involved. The process will typically start within an Asset Management System (or AMS) where pipes are selected for inspection. These pipes then need to be matched using a GIS system (Geographic Information System) so that operators know where underground the pipes are positioned, and how to gain access. Once the inspection data is captured, the results then require review before being entered back into GIS and AMS platforms.

The whole process can take several days, if not weeks, with different formats and spreadsheets and manual data entry required. Through the use of APIs, data being passed back and forth can be repeated and automated without the resource load and delay of having to manually match data in different systems each time.
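To make this concrete, here is a minimal sketch of that kind of hand-off: pulling completed inspection results from one system and pushing them into an asset management system. The endpoints, token and field names are hypothetical placeholders rather than any real vendor's API.

```python
# Hypothetical API integration: inspection platform -> asset management system.
import requests

INSPECTION_API = "https://inspection.example.com/api/v1"  # placeholder endpoint
AMS_API = "https://ams.example.com/api/v1"                # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}             # placeholder credentials

# 1. Fetch newly completed inspections
resp = requests.get(f"{INSPECTION_API}/inspections?status=complete",
                    headers=HEADERS, timeout=30)
resp.raise_for_status()

for inspection in resp.json():
    # 2. Map the result to the asset management system's schema
    record = {
        "asset_id": inspection["pipe_asset_id"],
        "condition_grade": inspection["structural_grade"],
        "inspection_date": inspection["date"],
    }
    # 3. Push the record straight into the AMS - no manual re-keying
    requests.post(f"{AMS_API}/condition-records", json=record,
                  headers=HEADERS, timeout=30).raise_for_status()
```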

There are a number of other uses for APIs in asset management given the number of different software tools that are involved in maintaining an asset throughout its lifecycle. 

Autonomous hardware control for finer movement

Many existing crawler systems have telemetry (movement) data available that is under-utilised in the current method of capture. Building systems that use this data to either recommend or automate crawler movement can prevent camera tipping and traction issues. Currently, operators need to be very careful in their operation of crawler hardware and risk losing the expensive camera gear in the pipe. Crawler manufacturers are looking for ways to utilise this telemetry data in a way that assists operators and speeds up the capture process. The future of such technology, if paired with AI, could lead to fully autonomous inspections being carried out at a faster rate with lower risk to the hardware.

Computer vision

The concept of computer vision (CV) is to use the pixels in a digital image to better understand what is happening in the picture. Some common computer vision applications include edge detection and filtering “noise” from images. Computer vision can also be used to estimate measurements from an image and to track changes over a series of images. The combination of computer vision tools can be used to provide additional insights and estimation measurements within CCTV inspection footage. We may also see applications for CV that stitch images together to create a “street view” like rendering of pipes, creating a software alternative to the similar deliverables that can currently only be obtained using specialised 360 degree cameras.
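As a minimal illustration of those low-level building blocks, the sketch below denoises a single inspection frame and runs edge detection with OpenCV. The file name and thresholds are placeholders and would need tuning for real footage.

```python
# Minimal computer vision sketch: filter "noise" from a frame, then find edges.
import cv2

frame = cv2.imread("pipe_frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
denoised = cv2.GaussianBlur(frame, (5, 5), 0)                # suppress sensor noise
edges = cv2.Canny(denoised, threshold1=50, threshold2=150)   # highlight cracks/joints

cv2.imwrite("pipe_frame_edges.jpg", edges)
```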

—–

There is so much innovation happening in the CCTV inspection space, and many companies are pushing the boundaries. Talk to your clients and suppliers about how they can incorporate some of the above innovations into their delivery process, and you might find some savings and additional value.

For further information about how you can streamline your CCTV inspection process with AI read more here.


How to capture AI-friendly Pipe Inspection Footage

As VAPAR’s CTO, it’s safe to say I’ve got a good familiarity with which inspection footage works well (and which doesn’t) for automated pipe inspections using artificial intelligence (AI).

Over the last few years, the capability of image-recognition AI models has improved significantly, meaning automation is now a serious time-saver for many organisations looking to optimise or streamline their image-based assessments.

Although the accuracy of artificial intelligence has improved over this time, the results which AI models are able to produce can sometimes be limited by the characteristics of the inspection footage they are fed. If contractors are looking to maximise the results they can achieve for themselves and their clients using AI, there are definitely some recommendations I've observed which should be followed.

As different AI vendors may have different ways of handling challenges and developing solutions, I've tried to cover each point with a generalist approach. Many of these challenges would also be true of a person trying to provide a condition assessment based on the footage alone.

Challenges and Limitations

Firstly, to get some better context around the recommendations, I’ll outline the main challenges and limitations of AI for automated CCTV coding I’ve observed during my time with VAPAR.

Granularity

Generally, pipe inspection standards define a number of codes that require a level of granular detail that is not reliably achievable, for operators or software, without quantitative computer vision and tracking of camera telemetry.

Sizing of Features

Determining the size of features to millimetre accuracy is a challenging task for software and human operators alike.

‘Clock’ Positioning

Recording positions using 12 segments (named to align with clock references) can be challenging depending on the amount of panning, tilting and zooming the operator undertakes during the inspection.
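The clock convention itself is straightforward to express in code. The helper below is only an illustration, assuming the angle is measured clockwise from the pipe crown (12 o'clock); it is not taken from any particular standard's wording.

```python
# Illustrative helper: map a circumferential angle (degrees, clockwise from the
# crown) to the 1-12 clock reference used in inspection reports.
def clock_position(angle_deg: float) -> int:
    angle = angle_deg % 360
    hour = round(angle / 30) % 12      # 30 degrees per clock segment
    return 12 if hour == 0 else hour

assert clock_position(0) == 12    # crown
assert clock_position(90) == 3    # right springline
assert clock_position(180) == 6   # invert
```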

Defects that look similar

There is a level of subjectivity in many of the inspection codes that are expected in the reports. If the inspection footage does not clearly show the issue, it is very hard for anyone reviewing it to produce an accurate report.

Start & Finish Nodes

Start nodes may not always be present in footage captured by CCTV contractors. Furthermore, the type of maintenance hole used to access pipes can be difficult for AI to ascertain. Inspection footage is typically started from the centreline of the maintenance hole pointed directly down the barrel of the pipe to be inspected. These nodes are typically evident to the CCTV operator as they require entry to perform the inspection. The other tricky thing about nodes is they often contain defects we would code in pipes, but would not code in the node (such as debris or cracking). I think more needs to be done around the inspection and reporting of node defects.

Continuous Defects

It can be difficult to determine whether defects are discrete or continuous when a CCTV camera is moving through a pipe, because defects jump in and out of frame during camera operation (sometimes we see panning and tilting without the camera stopping).

Multiple Assets in a Single Video

Where a CCTV camera travels through more than one asset, AI will need a way of identifying this distinction and handling the condition assessment of the assets separately. Otherwise the defects detected would all be assumed to be part of a single pipe asset, which is incorrect. It can be tricky to know whether the node between two pipes is expected or unexpected, especially in locations where GIS does not have a full picture of the underground assets.

Multiple inspection time frames captured in a Single Video 

Where a camera operator approaches an issue that needs to be resolved immediately (such as a blockage), they can stop recording, clear the issue, and resume recording again. Where the halted inspection footage and the completed inspection footage for the same asset are in a single video, AI needs a way of distinguishing the previous or 'abandoned' footage from the 'completed' footage, and of overriding the abandoned condition assessment with that of the completed footage.

Shape or Dimensions Change

Where pipe shape or dimensions change, quantifying the extent of this change can be difficult to determine when using visual inspection footage alone.

Smooth operator 🎵 (Smooth camera operation)

This is not so much of an issue in standard pan/tilt/zoom footage, but in "push rod" footage, where the speed of camera movement affects the quality of the footage, it can give AI and human reviewers alike a headache when trying to review.

Poor visibility

Cleaning while capturing inspection footage can cloud the view. We recommend cleaning first and then carrying out the inspection: firstly, the inspection video is a lot shorter, and secondly, defects are a lot easier to see in the pipe after cleaning. (There are also things that are less within the operator's control, such as splashing from water inside the pipe and steam, that make footage hard to review.)

Recommendations

Now that I’ve outlined the core problems we’ve encountered with AI for automated CCTV coding, let’s cover some tips to ensure you’re capturing AI-friendly pipe inspection footage:

Standardisations 

There are a number of standard procedures that operators can apply to ensure inspection footage is optimised for use with AI pipe assessments. Areas where standardised procedure can be introduced to great effect are:

  • Standardising the asset information block at the start of footage capture (on-screen display)
  • Standardising the chainage on-screen display positioning.
  • Standardising a requirement for the CCTV camera head to be centered within the pipe, with the field of view also centered (so the top and bottom of the pipe are seen equally).

Restrictions

There are also a number of procedural restrictions which CCTV operators can observe in order to create footage optimised for AI-based pipe assessments. These include:

  • Restriction of cleaning during capture of inspection footage (i.e. CCTV capture during jetting, where the jetting head is visible throughout the footage and obscures the field of view) used for condition assessment.
  • Restriction on reversing significant distances through the pipe – this can cause offsets in the chainage measurement and also cause problems for the AI, which will duplicate the detection of defects and features.
  • Restriction on zooming whilst moving (either driving forward or panning), as this can make the camera movement difficult to track.
  • Restriction on stopping and starting the capture of footage within a single video – i.e. where cleaning is performed or the camera is moved without recording, the condition inspection itself should still be captured in a single continuous pass.

These recommendations cover some of the main factors we've identified that can impact the post-processing of video files – either by AI or by an inspector.


Deep learning solutions in CCTV pipe inspections

Recently, I wrote a piece describing some of the machine learning challenges which I’d encountered during my time working with stormwater and sewer pipe inspection footage at VAPAR. Like any other industry, pipe infrastructure brings its fair share of issues that need to be resolved if valid and accurate results are to be obtained from AI models and provided to clients.

In this piece, I’d promised to also outline the deep-learning (which I’ll do here) and computer vision challenges (which I’ll be doing soon) that I’ve encountered during my VAPAR tenure so far.

So, with no further delay, let’s get into the deep learning struggles!

Correct Defect Identification (and Mixtures of Defects)

In total, we identify and categorise 80+ kinds of defects in pipe infrastructure – these are guided by the regional standard defect codes, such as those from Australia, NZ and UK. Since many of these defects are extremely similar, or define levels of severity within a defect type, distinguishing the difference between them is not an easy task. Even an experienced, specialised asset engineer won’t enjoy a high rate of success (and anecdotally the average accuracy industry-wide is 50%). With this in mind, it’s clear that performing this function accurately is also going to be challenging for an AI-based system.

Detecting the Proper Scale of a Defect

When a contractor records a defect, they will sometimes zoom in to inspect it more closely. Since our AI models identify and classify defects directly from this footage, instances of zooming can lead the models to incorrectly classify defects as being larger than they actually are. In turn, this affects the condition scores and repair recommendations we provide to a client.

Localization of defects

Another deep learning challenge which we encountered related to the localisation of identified defects. Localisation refers to the precise physical location of an identified defect within a pipe. Since the footage provided to us by clients does not contain any data relating to localisation or telemetry, we are immediately presented with a challenge – providing information relating to defect location through an AI model.

So, how did we overcome these problems?

Firstly, let’s discuss overcoming the challenges associated with correct defect identification, as well as instances of multiple defects within a single frame.

Solutions – Correct Defect Identification (and Mixtures of Defects)

For pipe inspections, VAPAR uses various defect classifications, initially identified by a pre-trained deep learning model. However, based on my previous experience, I knew that by only utilizing transfer learning and fine-tuning techniques, we could not achieve optimal results.

Features in the final layer of our pre-trained model (before the classification layer) usually have small dimensions, which is not suitable for our application. Since we deal with a large range of defect types, the core problem to solve was establishing how to alter the layers in our pre-trained convolutional neural network to achieve optimum results. We eventually managed to achieve this by combining our domain and application-specific knowledge with a nuanced understanding of deep learning and convolutional neural networks.
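To give a flavour of the general idea (this is a simplified sketch, not our actual architecture), the snippet below takes a pre-trained backbone, replaces its small final layer with a wider custom head sized for many defect classes, and freezes the earlier layers. The layer sizes and class count are assumptions for illustration.

```python
# Sketch of transfer learning with a custom classification head (illustrative
# layer sizes; not the production VAPAR network).
import torch.nn as nn
from torchvision import models

NUM_DEFECT_CLASSES = 80  # order of magnitude suggested by regional coding standards

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
in_features = backbone.fc.in_features       # 2048 for ResNet-50

# Replace the stock classifier with a wider, deeper head
backbone.fc = nn.Sequential(
    nn.Linear(in_features, 1024),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(1024, NUM_DEFECT_CLASSES),
)

# Freeze early layers; fine-tune only the last block and the new head
for name, param in backbone.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False
```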

An example of the imbalances in data which we successfully accounted for


Data imbalance was another issue when aiming to optimize our defect identification. The graph above illustrates the distribution of training data which our deep learning model utilised. It is evident that certain classifications are far more prevalent within the data set, which biases the results of the model toward those categories most commonly represented – potentially skewing classification when performing inspections for clients.

To resolve this imbalance, researchers and practitioners typically use two techniques – balanced sampling and a weighted loss function. In this instance, we utilized a combination of the two to get the most out of our model, improving performance by around 25 percent.
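For readers who want the mechanics, here is a minimal sketch of those two techniques in PyTorch, with a toy, heavily imbalanced label set standing in for real training data; it illustrates the approach rather than our production training code.

```python
# Two standard imbalance remedies: balanced sampling and a class-weighted loss.
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader, TensorDataset

labels = torch.tensor([0, 0, 0, 0, 1, 1, 2])     # toy, heavily imbalanced labels
features = torch.randn(len(labels), 16)          # placeholder features
dataset = TensorDataset(features, labels)

# 1. Balanced sampling: rarer classes are drawn more often
class_counts = torch.bincount(labels).float()
sample_weights = (1.0 / class_counts)[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
loader = DataLoader(dataset, batch_size=4, sampler=sampler)

# 2. Weighted loss: mistakes on rare classes cost more
class_weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = torch.nn.CrossEntropyLoss(weight=class_weights)
```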

To combat the issue of having multiple defects in one frame (see the image below for an example), we combined the results of our deep learning model with our machine learning model, and developed an AI-based algorithm to effectively account for instances of multiple defects.

An example of multiple defects found within a single frame

Solutions – Detecting the Proper Scale of a Defect

To correctly determine the scale of identified defects, we defined and developed a new AI model that draws on the three most important AI techniques for us: computer vision, machine learning and deep learning. First, we developed a deep learning-based solution for measuring the scale of the defect in each relevant frame. Then we utilized a machine learning solution to find similarities between different frames. Within the machine learning model, we utilized computer vision techniques to provide the data required for the model. Successfully executing this solution allowed us to deliver strong performance in accounting for camera zoom and correctly capturing the scale of defects.
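One illustrative computer-vision ingredient for this kind of pipeline is estimating the relative zoom between two frames from matched keypoints. The sketch below is a simplification under the assumption of a pure zoom with a roughly static scene, not our production method.

```python
# Estimate relative zoom between two frames: match ORB keypoints, then compare
# how spread out the matched points are in each frame.
import cv2
import numpy as np

def relative_zoom(frame_a, frame_b):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(500)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:50]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # Ratio of the average spread of matched points around their centroid
    spread_a = np.linalg.norm(pts_a - pts_a.mean(axis=0), axis=1).mean()
    spread_b = np.linalg.norm(pts_b - pts_b.mean(axis=0), axis=1).mean()
    return spread_b / spread_a   # > 1 means frame_b is zoomed in relative to frame_a
```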

Solutions – Localization of Defects

Personally, I found solving the issues around localisation to be the most satisfying resolution of all those which I’ve outlined so far.

With available deep learning and image segmentation techniques, along with the right dataset, localisation is an achievable task. However, for most industrial projects (like ours), providing that dataset carries huge time and cost requirements.

This left us at a crossroads – do we abandon this functionality for our clients? Perhaps an ordinary team might, but I'm proud to say our innovative and driven team managed to come up with a fantastic solution, using the latest state-of-the-art techniques in the deep learning discipline.

The images below illustrate some of the results obtained with our solution.

Original image | Localisation data

For me, the coolest part of our solution is that we can classify and localize the defect at the same time without any additional memory or time cost.


=================

Interested in learning about the computer vision challenges in CCTV pipe inspections (and how we’ve overcome them)? Stay tuned for future blogs!

Alternatively, check out the piece we already completed relating to machine learning challenges.

=================

Saeed Amirgholipour PhD. is an AI Architect, Full Stack Data Scientist, and Data Science/ AI Lead Trainer with over 10 years of industry experience, including CSIRO’s Data61, Australia’s leading data innovation group. His experience spans end-to-end large-scale innovative AI, Data Science, and analytics solutions. Saeed has a passion for solving complex business problems utilizing Machine Learning (ML) and Deep Learning (DL) models.

Pipes bring unique machine learning challenges. How can we overcome these?

I joined VAPAR as their Lead Data Scientist in May, meaning that I’ve just passed my first six months as a ‘Vaparino’. So far, it’s been a really interesting journey; the water industry brings some unique challenges that have forced me to learn and solve uncommon problems in my role.

Firstly, some context – VAPAR is an Australia-based start-up which provides end-to-end AI-based solutions for the assessment of assets, specifically sewer and stormwater pipes. VAPAR's platform performs automated inspections of these pipes to find defects, report the exact locations of those defects and provide repair recommendations to our clients.

For asset owners like Councils and Water Utilities, time and money are hugely important, connected considerations. Because of the massive lengths of networks these asset owners are responsible for managing, pipes will sometimes go decades without receiving an inspection. This means that asset owners are at risk of not addressing critical issues on time, increasing the risk of service disruption dramatically. That’s where VAPAR can improve the process; we provide time-efficient, automated assessments that allow asset owners to save on pipe repairs, and also protect against unplanned repairs.

As a Data Scientist, the most crucial lesson to remember for a new use-case is to get acquainted with your data as deeply as possible. Once you’ve got a solid understanding of the data, you’re able to imagine solutions to your problems with far greater ease. Being new to the water industry, there were two key challenges related to machine learning that needed to be overcome. 

Duplicate Defect Reporting

When a contractor is recording pipe inspection footage, they will move the camera through the pipe at inconsistent speeds, including instances where the camera is stationary for a couple of seconds. They will also tilt the camera head around to get a better look at the inside of the pipe.

Due to the significant computational cost involved, our platform samples frames rather than analysing every frame of a piece of footage. Because of this, combined with the inconsistent speed of camera movement and operation, it is possible for a defect to be observed initially from a distance and then reported a second time while still in the field of view.
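For illustration, frame sampling itself can be as simple as the sketch below, which picks a fixed number of evenly spaced frames instead of decoding every frame; this is an assumed approach for demonstration, not our exact implementation.

```python
# Evenly sample a fixed number of frames from an inspection video with OpenCV.
import cv2

def sample_frames(path: str, n_samples: int = 200):
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(n_samples):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / n_samples))  # seek
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```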

Such duplication of defects can have a huge effect on the data we use to provide advice to our clients. If left unaddressed, this could lead to inaccurate, misleading repair recommendations which could result in clients assigning budget for repairs to pipes which didn’t require them, creating huge inefficiencies.

Start and end node detection

Start and end nodes, in the context of pipe infrastructure, refer to the beginning and end of each piece of pipe infrastructure (where a maintenance cover will be located). For stormwater pipes, these might be the grates where stormwater runs off from the street and into the pipes.

Because start and end nodes are the points at which cameras are inserted and retracted, they will often be captured in the pipe inspection footage, which can cause defects to be identified that aren't relevant to the pipe condition assessment. A common example is when the inspection camera points directly upwards at the vertical well leading to the surface, or pans around to capture the other pipe connections to the node.

If we were to leave reported defects from start and end nodes in our data without adjustment, we would be reporting large volumes of irrelevant defects in our pipe condition assessments for our clients. 

How we approached these problems

In order to develop models which could account for these problems, we developed a process which would allow our machine learning models to identify instances where they occurred, and take appropriate action to prevent these problems from impacting the results and recommendations which we provided to our clients.

When a client uploads footage to our platform, it provides initial defect detection as an output. If this data were to be provided to the client immediately, we simply wouldn’t be providing accurate advice or recommendations.

Instead, the VAPAR team took the data that was initially output by our platform, and performed some further automated analysis and data preparation steps on it, taking in additional factors about the footage.

Some of our exploratory analysis relating to data distribution among features in cases of duplicate defects


Once preparation was finalised, we fed this pre-processed data into a machine learning model, where, combined with the distance information from the dataset, we were able to define a model which could automatically identify duplicate frames.
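To give a sense of the shape of such a model (without revealing our actual features or implementation), the sketch below trains a small classifier over hypothetical engineered features of detection pairs and flags likely duplicates; the feature names, toy data and model choice are illustrative assumptions.

```python
# Toy duplicate-detection model: classify pairs of detections as duplicate or not
# from engineered features such as chainage gap, frame gap and visual similarity.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

pairs = pd.DataFrame({
    "chainage_gap_m":    [0.1, 4.0, 0.3, 7.5, 0.2, 5.0],
    "frame_gap":         [12, 400, 30, 900, 20, 600],
    "same_defect_code":  [1, 1, 1, 0, 1, 0],
    "visual_similarity": [0.92, 0.40, 0.85, 0.20, 0.88, 0.35],
    "is_duplicate":      [1, 0, 1, 0, 1, 0],   # manually labelled training pairs
})

X = pairs.drop(columns="is_duplicate")
y = pairs["is_duplicate"]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At report time, only the first detection of each predicted-duplicate pair is kept
print(clf.predict(X))
```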

The results for duplicate defect detection after the development of the new AI model



Using this same methodology, we were also able to define a model which would exclude defects found in start and end nodes from the analysis and recommendations provided to our clients.

The results for start and end node detection after the development of the new AI model

Without a deep understanding of the data and the challenges it poses, these solutions would have been very challenging to identify, test and implement. Domain knowledge of your use case is key to developing robust, dynamic solutions to provide the best possible outcomes.


=================


Interested in learning about the computer vision and deep learning problems in CCTV pipe inspections (and how we’ve overcome them)? Stay tuned for future blogs!

=================

Saeed Amirgholipour PhD. is an AI Architect, Full Stack Data Scientist, and Data Science/ AI Lead Trainer with over 10 years of industry experience, including CSIRO’s Data61, Australia’s leading data innovation group. His experience spans across end-to-end large-scale innovative AI, Data Science, and analytics solutions. Saeed has a passion for solving complex business problems utilizing Machine Learning (ML) and Deep Learning (DL) models.


Microsoft roundtable discussion with Satya Nadella, Global CEO of Microsoft

This week, VAPAR was honoured to be one of eight companies involved in a Microsoft round table discussion with none other than Satya Nadella himself! Alongside his passion for Microsoft solutions was his message to the partner network: ensure our customers succeed – even if a competing solution to Microsoft is a better fit for the project. This was a great reminder to keep the customer at the heart of what we all do in order to continue to add value.

We’d like to thank Microsoft Australia for the opportunity as it allowed us to obtain such an inspirational takeaway.


Automation and why it’s not taking your job

Automation is a hot topic and is reaching new applications every day, but this should be embraced instead of feared.


Automation used to be seen as a sort of wizardry confined to the world of IT and software engineers. The intent of automation is to take on the data-intensive, risky or precision-demanding tasks, removing error, repetitive strain and risk of injury for the people who previously had to do these jobs.

Machines and programs are literally manufactured to bear this load. Humans only have one body and, in some industries, it only takes one incident to change your life. Besides, does anyone actually enjoy doing the same mind-numbing task over and over again for a year, let alone as a career?

The things that separate people from machines are our creativity, analytical skills and problem-solving skills. Looking at current applications, automation done well frees people from the jobs they hate and gives them time to focus on the more cognitively demanding processes that require human decision-making. This is significantly more fulfilling than "going through the motions" on a process that is repetitive and easily automatable.

On top of this, people are always going to be needed where automation is introduced. Every automation process has “exceptions”, which are cases that don’t fit the bill. These exceptions are traditionally complex or confusing and require detailed analysis. Automation takes away the pressure to get through more cases while allowing you to focus on the cases that really need your attention.

As automation is introduced to each industry, the industry evolves and expands, and the same people can take on far more fulfilling, value-adding tasks.

If you’re interested in hearing more, there is a great TED talk you can watch here:

Read how VAPAR with its AI software is helping councils and utilities improve their sewer inspections here.


When is automation the answer?

Managing large infrastructure asset databases can be time and resource intensive.

But is automation the answer?

Automation is best done for tasks that are:

  1. Manual
  2. Repetitive
  3. High frequency

These types of tasks have ongoing implications for operational expenditure if not automated.

What are some real-world applications for automation in your water business?

If your business manages underground stormwater or sewer pipes, there is now a product to automate the condition assessment and mapping from inspection footage – learn more.

CCTV review – what is it currently costing me?

Engineers spend hundreds of hours a year reviewing, interpreting and creating work orders from the inspection footage results.

The workflow is manual, repetitive and (depending on level of compliance to best practice asset management standards) is done at a high frequency.

The cost per metre is related to the internal operational costs for reviewing the footage, extracting the key data and adding the information to various enterprise databases (like GIS, ERP systems, etc).

If the inspection footage is reviewed, interpreted and recorded in a timely manner then there are further savings and benefits in terms of:

  • Identification and verification of defects that operators miss – picking up issues before they become more serious (and expensive!) problems.
  • Better forecasting in long term financial budgets for maintenance based on actual, unbiased existing asset condition.
  • Better planning of workforce requirements for short term and long term maintenance.

Is there an automated solution for this work?

Yes! Now there is finally a way to review, interpret and record the condition of underground pipes, based on the CCTV inspection footage. It’s all done through a web platform.

Find out more about the web platform and pricing here.