About


Problem

Construction projects are notorious for running behind schedule, going over budget, and endangering workers. According to the Bureau of Labor Statistics' 2015 Census of Fatal Occupational Injuries, construction leads all industries in worker injuries and fatalities. Autonomous technology could have a huge positive impact on construction, increasing efficiency and reducing worker exposure to dangerous situations. However, construction sites are ad-hoc environments with constantly changing temporary roads, materials being moved, and structures being built, which makes them very difficult for autonomous solutions to navigate.

Solution

In our Capstone Project we aim to build a solution that autonomously builds an accurate and up-to-date 3D model of a given construction site using a drone. The site model will contain a 3D spatial occupancy grid as well as labelled roads and points of interest. This information will be provided wirelessly to other systems on the site, enabling autonomous solutions to navigate the site and work together.

Team


Progress Blog


Promotional Video

March 22

Symposium Preparation

March 20

With just a few days remaining before the symposium, the project has reached its final stage: the hardware/software platform is locked for demo day. Some refinement of the data pipeline and full integration of the GoPro onto the drone chassis were the final touches required. The image below shows the final hardware configuration of the drone as it will be presented on demo day.

Our final drone hardware.

The team collected its final dataset this past weekend, along with video footage of the drone in action. For the symposium, the team plans to display a short (1 min) clip alongside the poster board; both are shown below. Additionally, the aim is to include two interactive elements: a display showing the 3D model of the site and an iPad with the web app loaded and ready for annotation. You may notice some blank space left on the poster; this is simply to leave room for the displays to sit on the table without obscuring any of the poster content.

Symposium poster.

The final revision of the annotation web app is also now complete; a screenshot below gives an impression of the UI. The web app is an essential last step in the data pipeline, as it allows POIs and roads to be added to the site model, creating a richer dataset for autonomous technologies to use than the collision model alone (the .stl file).

Final web app.
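To give a concrete sense of what the annotations add on top of the mesh, here is a minimal sketch of what an annotated site model export might look like. The field names, structure, and coordinates are all hypothetical, not the web app's actual schema:

```python
# Hypothetical export of an annotated site model; the web app's real
# schema may differ. Coordinates are placeholder lat/lon pairs.
site_model = {
    "mesh": "site_model.stl",  # collision model produced by the pipeline
    "roads": [
        # each road is an ordered list of points the user clicked
        {"id": 1, "points": [[43.4701, -80.5401], [43.4703, -80.5396]]},
    ],
    "points_of_interest": [
        {"id": 1, "label": "crane", "position": [43.4702, -80.5400]},
    ],
}
```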


Construction Site Data Capturing

March 7
Data Collection

The drone has been repaired and is ready for flight. The team was fortunate enough to find a 100 m x 100 m construction site. Mihai showed expert drone control, and the team was able to collect two sets of data through manual flight. The combined datasets were run through the pipeline on AWS, producing outstanding results.

Point cloud orthographic photo.

The data was captured using a GoPro Hero5, as the camera issues are still preventing the team from taking pictures faster than one frame every 12 seconds. A bonus feature the team got to test was the drone's automatic landing when battery levels are critically low. Although that cost the drone one of its carbon fiber landing legs, it is still in good shape and ready to fly next weekend for further data collection.

Website

The website is currently going through design critiques and restructuring to improve the user experience and overall interface. The team is also thinking of creative ways to prepare media and data for the symposium. We have a few fun things planned to show; come check us out on March 23rd at the Davis Center.


The Team Faces Some Hurdles

February 28
The Camera

Over the past week, Hugo worked tirelessly to get the camera hardware communicating through ROS and capturing images in flight. Based on discussions with E-Con Systems, the company that provided the camera solution, it should be able to take full-resolution (13 MP) stills at 9 frames per second. However, after receiving the source code for the camera in mid-February, the team found it can only manage one frame every 12 seconds. This solution is therefore insufficient, and alternatives have to be considered.

The Drone

A lot of progress was made to prepare the drone for flight. Below is the setup for the tethered dynamic testing, which we used to tune parameters as well as gain some experience controlling the drone.

Rishab tightening the rotor nuts before testing.

Here is some footage of our drone in action:


The second free flight, after static and tethered dynamic testing, lasted approximately 15 seconds before the drone crashed into a wall, breaking a carbon fiber rotor arm as well as one of the motors. Although a setback, the team is optimistic the drone will be up and running by the end of the week so testing can move outside, far away from walls that come out of nowhere.

Data from flight

Although the flight was short, the drone was able to collect 15 images, which were run through the data processing pipeline. The results were not expected to be good given the small number of images and the unsteady flight. However, despite predictions, many features were captured, as can be seen from the point cloud below.

Point cloud from the first (indoor) flight test.

Communication with the Drone

Communicating with the drone for autonomous flight requires a mission to be loaded onto the drone. We can simulate full autonomy; however, we are currently working on establishing a link with the drone through ROS to make it a reality. The video shows the drone taking off and completing a mission defined by multiple waypoints around 8 Hickory St West; after visiting every waypoint, the drone returns to the launch point.
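As a rough illustration of what that ROS link could look like, below is a minimal sketch of uploading a waypoint mission through MAVROS, a common ROS-to-ArduPilot bridge. Whether the team uses MAVROS, and the specific coordinates, are assumptions for illustration:

```python
#!/usr/bin/env python
# Minimal sketch: pushing a waypoint mission over MAVROS.
# Placeholder coordinates; a real mission would come from the planner.
import rospy
from mavros_msgs.msg import Waypoint
from mavros_msgs.srv import WaypointPush

def make_waypoint(lat, lon, alt):
    wp = Waypoint()
    wp.frame = Waypoint.FRAME_GLOBAL_REL_ALT  # altitude relative to home
    wp.command = 16                           # MAV_CMD_NAV_WAYPOINT
    wp.autocontinue = True
    wp.x_lat, wp.y_long, wp.z_alt = lat, lon, alt
    return wp

if __name__ == "__main__":
    rospy.init_node("mission_upload")
    rospy.wait_for_service("/mavros/mission/push")
    push = rospy.ServiceProxy("/mavros/mission/push", WaypointPush)
    mission = [make_waypoint(43.4701, -80.5401, 20.0),
               make_waypoint(43.4706, -80.5401, 20.0)]
    resp = push(start_index=0, waypoints=mission)
    rospy.loginfo("Mission accepted: %s", resp.success)
```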



Valentine’s Day Update

February 14
Drone Hardware

The hardware platform is complete and ready for testing, with the exception of the camera. Manual flight testing will begin shortly and will expose any errors in calibration or setup that may have occurred.

Final Drone Hardware (Excluding camera)

Camera

The camera hardware is currently held back by issues the team has had interfacing with it. The camera manufacturer has provided the code required to interface with the camera, but work remains to fully integrate it within the drone's software stack.

Mission Planner

This is the portion of the pipeline that plans the path the drone should take given an address or the GPS coordinates of the construction site. So far, the mission planner can generate a set of GPS coordinates representing the drone's proposed path. Further work is needed to determine the optimal spacing of points to achieve the desired overlap between images; a sketch of this calculation follows below. Additionally, the code will eventually need to interface with the drone control software to translate the set of points into actual drone movements.

Mission Planner
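For a sense of the spacing calculation mentioned above, here is a simple sketch under a pinhole-camera assumption: the ground footprint of one image grows with altitude and field of view, and the waypoint spacing is that footprint scaled down by the desired overlap. The specific numbers are illustrative, not the planner's actual parameters:

```python
import math

def waypoint_spacing(altitude_m, fov_deg, overlap):
    """Distance between adjacent waypoints so consecutive images
    share the given fractional overlap (pinhole-camera model)."""
    footprint = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    return footprint * (1.0 - overlap)

# e.g. flying at 40 m with a 90-degree FOV and 70% overlap:
print(waypoint_spacing(40.0, 90.0, 0.70))  # ~24 m between waypoints
```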

Drone Control

In the context of the project, “Drone Control” refers to the software running on the drone itself that instructs the flight controller what to do (fly to a waypoint, take a picture, etc.). At the moment, the code runs inside a simulated environment; some work remains to translate it into code that will fly the real drone.
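As a toy illustration of this fly-and-capture loop, the sketch below models the control logic as a small state machine. The states and transitions are simplified assumptions, not the team's actual implementation:

```python
from enum import Enum

class DroneState(Enum):
    IDLE = 0
    FLY_TO_WAYPOINT = 1
    CAPTURE_IMAGE = 2
    RETURN_HOME = 3

def step(state, at_waypoint, waypoints_remaining):
    """Advance the simplified control loop by one tick."""
    if state is DroneState.IDLE and waypoints_remaining:
        return DroneState.FLY_TO_WAYPOINT
    if state is DroneState.FLY_TO_WAYPOINT and at_waypoint:
        return DroneState.CAPTURE_IMAGE
    if state is DroneState.CAPTURE_IMAGE:
        return (DroneState.FLY_TO_WAYPOINT if waypoints_remaining
                else DroneState.RETURN_HOME)
    return state
```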


Progress on All Fronts

January 29

After a series of design meetings the previous week, groundwork was laid from both a hardware and a software perspective. Point cloud filtering, the website backend and frontend interfaces, drone flight simulation, and hardware setup and bringup were all worked on to ensure that no critical component was left unattended.

Hardware Setup/Bringup

After 3D printing the necessary brackets, the drone was assembled and connected to the Ardupilot ground station software running on a laptop. This confirmed that all drone modules worked as expected. The next step is to connect the controller to the drone so that manual flight can be performed.

Point Cloud Filtering

To filter a point cloud, it is first mapped to an image: the (x, y) coordinates are mapped to (i, j) indices and the z-coordinate becomes the pixel intensity. This method assumes that the (x, y) coordinates contain no erroneous data and that filtering is performed on the height data only. This simplifies the filtering process while maintaining an accurate representation of the data.

Given a set resolution, the (x, y) coordinates of each point are mapped to an (i, j) index tuple. Once every point has been mapped, inpainting is applied so that no image pixel is left without associated data. Without inpainting, any filtering method would introduce more erroneous data, since it would rely on non-existent data for smoothing, edge enhancement, outlier removal, and so on.

The data is then filtered through a given method, which will be chosen based on data obtained from testing. The filtered image can then be mapped back to a point cloud for further data handling (exporting to STL, adding GPS data, etc.). A sketch of this loop is shown below.
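The sketch below walks through the rasterize-inpaint-filter-unrasterize loop. It uses a nearest-neighbour fill in place of true inpainting and a median filter as a stand-in for whichever method testing selects; both choices are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def filter_point_cloud(points, resolution=0.1, kernel=5):
    """Rasterize an (N, 3) point cloud to a height image, fill empty
    pixels, filter the heights, and map back to a point cloud."""
    xy, z = points[:, :2], points[:, 2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / resolution).astype(int)
    h, w = ij.max(axis=0) + 1
    img = np.full((h, w), np.nan)
    img[ij[:, 0], ij[:, 1]] = z  # last point per cell wins

    # Fill empty cells from the nearest populated cell (stand-in for
    # inpainting, so the filter never smooths over missing data)
    empty = np.isnan(img)
    nearest = ndimage.distance_transform_edt(
        empty, return_distances=False, return_indices=True)
    img = img[tuple(nearest)]

    # Filter heights; the final method will be chosen from test data
    img = ndimage.median_filter(img, size=kernel)

    # Map the filtered image back to 3D points
    ii, jj = np.indices(img.shape)
    xy_back = np.stack([ii, jj], axis=-1).reshape(-1, 2) * resolution + origin
    return np.column_stack([xy_back, img.ravel()])
```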

POI Website Interface

In order to add roads and points of interest (POIs) to the map, a web interface has been implemented where a user can click and connect points over an orthophoto or the generated point cloud.

Website Prototype

As shown in the picture above, the interface still requires design work, but feature-wise the website is almost complete. Roads are saved and can be reloaded by accessing the website; they can also be added, deleted, and modified by the user at any point in the process.

Ardupilot Simulation

The ROS-enabled Ardupilot simulation was also set up during the week, and a pre-built drone model was used to perform a given mission plan. From this initial testing, further work can proceed on mission plan generation, as well as on properly linking the simulation with real hardware.


Drone Hardware and Assembly

January 21

The second term (Jan - April 2018) begins with mechanical and hardware assembly of the drone kit, progressing towards manual flight using an RC controller.

We printed the mounting bracket for the camera hardware, and the design for the anti-vibration bed for the controller is complete. Both the mount and the bed dampen the vibration introduced by the drone's motors, removing noise that could corrupt our location measurements.

Lynxmotion HQuad500


Intro to Autonomappr

January 16

Today, according to the Global Projects Database, large projects in the construction industry run an average of 20% longer than originally scheduled, with some as much as nine years late. According to the same study, projects are also over budget by an average of 80%, with some as high as 650% [1]. In addition, almost 50 workers are injured every minute of the workweek and 17 deaths occur per day in the USA, according to OSHA [2].

Despite technological improvements, many projects remain over budget, behind schedule, and dangerous. Autonomappr aims to provide an accurate site model with labelled points of interest to increase efficiency and mitigate safety concerns by enabling autonomous technology on construction sites. The project objectives are outlined below.

Autonomous Data Gathering

Build a 3D mesh representation of the construction site

Enable road and point of interest marking

Generate approximate volume measurements

Make results available through an API

The team defined the constraints and criteria below to ensure the solution is successful and competitive in the current market.

Constraints

Cost: Physical platform costs less than $3000
Mass: Physical platform weighs less than 6 kg
Accuracy: 3D point cloud is accurate within 15 cm
Speed: Able to fully refresh the site model twice per day (for a 100 m x 100 m area)
Robustness: Able to operate on all terrains typically found on a construction site
Autonomy: Gathering of raw terrain data must be autonomous

Criteria

UX: Enable end users to label roads and points of interest on the 3D model
Stretch Goal: Automatic road and point-of-interest detection through semantic segmentation
Resolution: Exceed a point cloud density of 100 points per square meter

Since September 2017, the project has progressed from initial ideas to a final design for the mechanical, hardware, and software components. The solution revolves around a multi-rotor drone that inspects the construction site from above, capturing images and topology data.

The data processing pipeline turns the raw data into a site model that is useful for end users and other autonomous robots on site. The team is on schedule to present at the Capstone Symposium in March 2018.

[1] R. Agarwal, S. Chandrasekaran and M. Sridhar, "Capital Projects & Infrastructure - Imagining construction's digital future," McKinsey & Company, June 2016. [Online]. Available: https://www.mckinsey.com/industries/capital-projects-and-infrastructure/our-insights/imagining-constructions-digital-future. [Accessed 16 October 2017].

[2] J. Moody, "A Look at the ROI of Safety," 12 September 2017. [Online]. Available: https://www.safetyservicescompany.com/topic/uncategorized/look-roi-safety/. [Accessed 16 October 2017].