Jashan Bhoora | blog

Posted: 14/10/2018

I realise it has been 2 years since I last updated my blog. The main reason for this is that I simply had to stop working on the extracurricular projects I used to enjoy, as I left CC and my final years at uni became more intense. Finally, 5 years after I started, it has all come to an end! It has been an unbelievable ride, with many friends and memories made along the way. I'm happy to say that I will be returning to CC full-time. With graduation day coming up in the next few months (and me actually having nothing to do for a bit!) I thought I'd take a moment to write about my final year project. (Effectively, this post is a cut-down version of my final report.)

The final year project is the capstone of the Masters degree at York. Supervisors suggest various topics of potential research within their specialities, and we are given the choice of which one we may want to pursue. We are also able to define our own projects. If you've read my previous posts, you probably won't be too surprised to hear that my project was self-defined and revolved around RepRap.

The printing process that RepRap printers most commonly use is called Fused Filament Fabrication (or just FFF). A filament of plastic is heated to its melting point, then drawn out in submillimetre layers to build up an object. This process is well-defined, with extremely high resolutions possible. However, the current process is completely open-loop. There are numerous things that can go wrong with a print as it's printing (from bed detachment to nozzle blockages). Since users tend not to watch prints that will take several hours to complete, an undetected print failure equates to time, plastic and money wasted.

So, my project was to create a system that would analyse the output of a RepRap FFF 3D printer as it prints. The system would constantly check whether the output of the printer is what it should be - if it isn't, then it should stop the print and alert the user. To help it along, the system would be given the STL model being printed so that it has an idea of what to expect. Additionally, the system would also know how much of the print (in millimetres) had been completed (i.e. how high the print head is above the bed).

At this point we knew what the project should achieve; (my supervisor and) I needed to work out a viable way of doing it. I'm going to skip all of the research and alternative solutions we came up with and focus mainly on the one I took through to implementation.

Fundamentally, the problem was approached as a computer vision problem. We had an object being formed in 3D that we wanted to accurately compare to a virtual 3D representation. Additionally (and in line with RepRap) we had some non-functional requirements for the project that aimed to make it as accessible and widely applicable to different RepRap printers as possible. This meant building the system with widely available parts and with the potential to be cheap to implement.

First, we needed a way to capture the model being printed. The obvious two options are cameras and 3D scanners. Given the simplicity, cost and prevalence of cameras against the niche of 3D scanners, we opted for a solution that would make use of multiple cameras (we ended up using 3 Logitech C920 webcams). This turned the problem into a 3D reconstruction problem - we would take images of the object as it printed from multiple viewpoints at the same time and use these to reconstruct a 3D representation to compare to the STL.

To make the process considerably simpler, I built a delta printer to use as the main stage for the project. The most important reason for this was that the bed of a delta printer doesn't move, meaning that cameras trained on the centre of the print/bed don't need to track the print object as it moves. With the printer assembled and calibrated, I used it to print off some arms to hold the webcams around the print bed. At that point the hardware for the project was complete and the problem became more software-oriented. The main library used for this project was OpenCV, which provides a multitude of established computer vision algorithms.

To summarise, the final hardware setup for this project was:

  • The printer - Anycubic Kossel Plus Linear
  • Control board - Duet3D Ethernet (I decided to move away from cheaper RAMPS setups, and this turned out to be an extremely good decision.)
  • CV processing platform - Raspberry Pi 3
  • Cameras - 3x Logitech C920 + powered USB hub

The final processing pipeline has a number of steps. Starting from scratch with a ready-to-go 3D printer, a user will need to:

  1. Calibrate the cameras themselves - the intrinsic parameters (a one-off operation, necessary only if the cameras are changed)
  2. Calibrate the camera rig with the printer - the extrinsic parameters (a one-off operation, necessary only when the cameras are moved)

Once this is done, the system has enough information to translate any point in 3D printer space to a pixel in each of the images from the calibrated cameras.
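
For the curious, this translation is just the standard pinhole camera projection, which OpenCV wraps up for us. Here's a rough sketch (the parameter names are placeholders, not the project's actual code) - the two calibration steps below exist purely to produce the inputs to this function:

```python
import numpy as np
import cv2

# Rough sketch (not the project's actual code): map a printer-space point to a
# pixel for one camera. K, dist, rvec and tvec are placeholder names for the
# intrinsic and extrinsic parameters produced by the two calibration steps.
def printer_point_to_pixel(point_xyz, K, dist, rvec, tvec):
    obj = np.float32([point_xyz]).reshape(-1, 1, 3)
    img_pts, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
    u, v = img_pts[0, 0]
    return u, v
```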

Intrinsic Parameter Calibration

This step calculates a set of parameters that describes the mathematical transformation a camera applies to a scene when it captures it and stores it as a 2D array of pixels. This is a common operation in computer vision, and as such OpenCV has a process for this built in. It consists of printing out a checkerboard of known dimensions, then taking images of it with the camera being calibrated from multiple viewpoints. This allows the optical effects that the camera applies to the known flat pattern to be measured. Note that if identical cameras are being used then this can be done with one camera and the resulting parameters can be used on all three. (There may be minute differences, but these should be negligible.)
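
To give an idea of what this looks like in practice, here is a minimal sketch using OpenCV's built-in routines. The board size, square size and file paths are placeholder assumptions, not the values used in the project:

```python
import glob
import numpy as np
import cv2

# Placeholder assumptions: a board with 9x6 inner corners, 25 mm squares, and
# ~20 photos of it (from different angles) in calib_images/.
CORNERS = (9, 6)
SQUARE_MM = 25.0

# 3D positions of the corners on the (flat) board, with z = 0
objp = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, CORNERS)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# camera_matrix and dist_coeffs together are the intrinsic parameters
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```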

Extrinsic Parameter Calibration

With the individual camera properties known (and the cameras now mounted as pictured above), we are in a position to calibrate the camera rig to a specific printer. The end result of this is that, for each camera, we are able to say that a point (x,y,z) in 3D (printer) space corresponds to a point (u,v) in the 2D image from the camera. Unfortunately, this is the most tedious part of calibration, since for the system to be able to apply this 2D-3D point correspondence in a general manner it needs to be done manually first. This requires printing a model with many obvious features and manually matching them between each camera's viewpoint and the STL itself (which is positioned as it was in the slicer that generated the print, as opposed to the coordinate space in which it was created in CAD).

To facilitate this, I created a program that takes an STL model and 3 images, and allows a mapping to be generated between a pixel in each image and a 3D coordinate on the STL. STLs are nothing more than a series of connected triangular faces. To simplify the clicking process, only the corners of the triangles in the mesh can be mapped to. The interface is extremely basic, and requires a lot of screen space to use optimally! But, after performing this process, we can now calculate the transformation matrices.

We can confirm that the calibration is accurate by projecting the 3D points that were clicked into the image using the matrices, and confirming that the resulting points are close to the pixels that were clicked in the first place.
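
In OpenCV terms, this step boils down to solvePnP. A minimal sketch for one camera, assuming the manually clicked 2D-3D correspondences and the intrinsics from the previous step are already to hand (variable names are placeholders, not the project's actual code):

```python
import numpy as np
import cv2

# model_points: clicked STL vertices in printer coordinates (mm)
# image_points: the corresponding clicked pixels in this camera's image
# camera_matrix / dist_coeffs: intrinsics from the previous step
def calibrate_extrinsics(model_points, image_points, camera_matrix, dist_coeffs):
    obj = np.float32(model_points).reshape(-1, 1, 3)
    img = np.float32(image_points).reshape(-1, 1, 2)
    ok, rvec, tvec = cv2.solvePnP(obj, img, camera_matrix, dist_coeffs)

    # Sanity check: reproject the clicked 3D points and measure how far they
    # land from the pixels that were originally clicked.
    reproj, _ = cv2.projectPoints(obj, rvec, tvec, camera_matrix, dist_coeffs)
    err = np.linalg.norm(reproj.reshape(-1, 2) - img.reshape(-1, 2), axis=1)
    print("mean reprojection error (px):", err.mean())
    return rvec, tvec
```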

Silhouette Projection

At this stage, standard camera calibration is complete, and we are now performing application-specific operations. Now let's assume that we have printed out an object for the purpose of testing the error detection. It is still attached to the bed and the print head has been moved so that none of the cameras' views are obstructed. With this setup, we can discuss two approaches to actually detecting the error of the print: Silhouette Consistency Checking and 3D Reconstruction.

Silhouette Checking

To implement a simple error checking system, we can consider the surface the STL represents. If we project every point it contains into the viewpoint images, we will create a silhouette of where a correctly printed model should be. For this project, we call this the "expected silhouette".
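
One straightforward way to build the expected silhouette (a sketch of the idea, not necessarily exactly how the project code does it) is to project the three vertices of every STL triangle into the image and fill in the resulting 2D triangles, assuming the mesh has already been loaded as an array of triangles (e.g. with the numpy-stl package):

```python
import numpy as np
import cv2

# triangles: (N, 3, 3) array of STL triangle vertices in printer coordinates
# camera_matrix, dist_coeffs, rvec, tvec: calibration results from above
def expected_silhouette(triangles, image_size, camera_matrix, dist_coeffs, rvec, tvec):
    h, w = image_size
    mask = np.zeros((h, w), np.uint8)
    pts, _ = cv2.projectPoints(triangles.reshape(-1, 1, 3).astype(np.float32),
                               rvec, tvec, camera_matrix, dist_coeffs)
    pts = pts.reshape(-1, 3, 2)  # back to one 2D triangle per STL face
    for tri in pts:
        cv2.fillConvexPoly(mask, np.int32(tri), 255)  # triangles are always convex
    return mask
```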

We then want to compare this to the silhouette of what has actually been printed - i.e. the "actual silhouette". This means that we need to identify and extract the printed object in each image. I played with a variety of segmentation algorithms already built into OpenCV to see which one best fitted this setup. In the end, I went with GrabCut. In doing so, I also came up with a novel initialisation process that uses information from the previous steps to tell GrabCut roughly where the printed object should be. This is useful because it means that the user doesn't have to do this manually (which is how GrabCut is normally used) and it means that the process should generalise to any STL relatively well (though this is yet to be thoroughly tested).
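
The exact initialisation lives in the project code on GitLab, but the rough idea is something along these lines (a simplified sketch, not the exact code): seed GrabCut's mask with "probably foreground" wherever the expected silhouette says the object should be, and "probably background" everywhere else, then let GrabCut refine the boundary:

```python
import numpy as np
import cv2

# Simplified illustration: initialise GrabCut from the expected silhouette so
# no manual rectangle or scribbles are needed.
def segment_print(image_bgr, expected_mask, iterations=5):
    gc_mask = np.where(expected_mask > 0,
                       cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, gc_mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)
    # "Actual silhouette": pixels labelled definite or probable foreground
    return np.where((gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)
```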

Once we have both silhouettes, we simply compare them pixel by pixel, and produce a percentage error reading based on how many pixels match.
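
In code this is essentially a one-liner (how the percentage is normalised is a detail - this sketch simply counts mismatched pixels over the whole image):

```python
import numpy as np

# Percentage of pixels where the expected and actual silhouettes disagree
def silhouette_error(expected_mask, actual_mask):
    mismatched = np.count_nonzero(expected_mask != actual_mask)
    return 100.0 * mismatched / expected_mask.size
```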

3D Reconstruction

Silhouette Consistency Checking is simple to understand and implement, and for simple, "obvious", non-symmetric shapes with no surface detail it should produce a meaningful estimate of the error in a print. Of course, the majority of prints people use 3D printers for are far from simple, and so a more sophisticated approach might be better suited to the problem.

If we project every point in calibrated 3D space back into the silhouettes, we can reconstruct the model that was used to create the silhouettes. We can then compare the reconstructed model with the STL model to get a much better error estimation. This process is generally called Shape-from-Silhouette or SfS. To implement it, we generate a 3D boolean matrix to represent a voxel volume, then proceed to project every voxel into each silhouette image to check which voxels lie inside the silhouettes. Any voxel that isn't in the silhouette is removed (or "carved") from the voxel volume.
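
A minimal sketch of the carving step, assuming one silhouette mask and one set of calibration parameters per camera, and a cubic working volume defined in printer coordinates (on a Raspberry Pi the voxel resolution would need to be kept modest):

```python
import numpy as np
import cv2

# silhouettes: list of binary masks, one per camera
# calibrations: list of (K, dist, rvec, tvec) tuples, one per camera
# volume_min / volume_max: opposite corners of the working volume in mm
def carve(silhouettes, calibrations, volume_min, volume_max, resolution=100):
    xs = np.linspace(volume_min[0], volume_max[0], resolution)
    ys = np.linspace(volume_min[1], volume_max[1], resolution)
    zs = np.linspace(volume_min[2], volume_max[2], resolution)
    gx, gy, gz = np.meshgrid(xs, ys, zs, indexing="ij")
    centres = np.stack([gx, gy, gz], axis=-1).reshape(-1, 1, 3).astype(np.float32)

    occupied = np.ones(len(centres), dtype=bool)  # start with a solid block
    for mask, (K, dist, rvec, tvec) in zip(silhouettes, calibrations):
        pts, _ = cv2.projectPoints(centres, rvec, tvec, K, dist)
        u = np.round(pts[:, 0, 0]).astype(int)
        v = np.round(pts[:, 0, 1]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        in_silhouette = np.zeros(len(centres), dtype=bool)
        in_silhouette[inside] = mask[v[inside], u[inside]] > 0
        occupied &= in_silhouette  # carve away voxels outside this silhouette

    return occupied.reshape(resolution, resolution, resolution)
```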

To be continued...

At this point, I had the beginnings of a working error detection pipeline. Unfortunately, the report submission date was looming, so I had to stop implementation. And of course I have since left university, graduated, moved to Cambridge and started at CC again! I fully intend to buy the printer, rebuild it and complete the project when I'm able to, so stay tuned. For now though, I have made the project code available on GitLab, so if you're interested then feel free to take a look: Print Patrol. Everything that I've described here is implemented there, though documentation is still lacking. If you're interested in reading my full report (!), you can find it here. (I know I haven't shown an evaluation in this post - there is one in the full report!)

I'd be very excited to hear from anyone that is interested in this, whether you are a Reprapper or not - please get in touch! If I haven't explained anything clearly enough in this post (which I know is a very condensed version) or as I write the documentation on GitLab, I'd be happy to add more detail if asked. Thanks for reading!