Starting robotics - Driving scuttle with ROS - Gazebo simulation

Wednesday, June 1, 2022

After getting familiar with ROS, the next step was to get the navigation stack working in Gazebo for the SCUTTLE robot. Fortunately, the SCUTTLE developers had already created a number of ROS packages containing the SCUTTLE model, the startup scripts and the driver code: all the bits you need to drive SCUTTLE around using ROS.

The first challenge was to drive the SCUTTLE model around in Gazebo using the keyboard. This was easily achieved with Gazebo and RViz running on WSL2, although it was a bit slow.
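
For reference, driving around with the keyboard comes down to publishing Twist messages on the robot's velocity topic. Below is a minimal, hand-rolled sketch in the spirit of the standard teleop tools (ROS1/rospy); the key bindings, speeds and the /cmd_vel topic name are my own assumptions, not the scuttle packages' code.

```python
#!/usr/bin/env python3
# Minimal keyboard teleop sketch (ROS1 / rospy), in the spirit of the
# standard teleop tools. Key bindings, speeds and the /cmd_vel topic
# name are assumptions.
import sys
import termios
import tty

import rospy
from geometry_msgs.msg import Twist

BINDINGS = {
    'w': (0.3, 0.0),   # forward: (linear m/s, angular rad/s)
    's': (-0.3, 0.0),  # backward
    'a': (0.0, 1.0),   # rotate left
    'd': (0.0, -1.0),  # rotate right
}

def get_key():
    # Read a single keypress without waiting for Enter.
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

rospy.init_node('simple_teleop')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

while not rospy.is_shutdown():
    key = get_key()
    if key == '\x03':  # Ctrl-C exits
        break
    cmd = Twist()  # unknown keys publish a zero Twist, stopping the robot
    cmd.linear.x, cmd.angular.z = BINDINGS.get(key, (0.0, 0.0))
    pub.publish(cmd)
```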

The next challenge was to make the virtual SCUTTLE drive to a specific point using a custom ROS node. While doing the ROS for beginners I course I had written some code to do a move-to-goal for the turtlesim virtual robot. I updated that code and reused it for the SCUTTLE robot. The result is a robot that moves to a goal: mission accomplished!
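
The idea behind such a move-to-goal node is simple proportional control: steer towards the goal heading and drive forward until the distance closes. Here is a minimal sketch of that approach for a differential drive robot (ROS1/rospy). This is not the actual code from the course or the SCUTTLE packages; the topic names, gains and goal coordinates are assumptions.

```python
#!/usr/bin/env python3
# Minimal move-to-goal sketch for a differential drive robot (ROS1 / rospy).
# Topic names, gains and the goal coordinates are assumptions.
import math

import rospy
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry
from tf.transformations import euler_from_quaternion

GOAL_X, GOAL_Y = 2.0, 1.0  # goal position in the odom frame (example)

class MoveToGoal:
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/odom', Odometry, self.on_odom)

    def on_odom(self, msg):
        pose = msg.pose.pose
        q = pose.orientation
        _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])

        dx, dy = GOAL_X - pose.position.x, GOAL_Y - pose.position.y
        distance = math.hypot(dx, dy)
        # Heading error, normalised to [-pi, pi].
        error = math.atan2(dy, dx) - yaw
        error = math.atan2(math.sin(error), math.cos(error))

        cmd = Twist()  # stays a zero Twist once the goal is reached
        if distance > 0.05:
            cmd.angular.z = 1.0 * error  # P-controller on heading
            if abs(error) < 0.5:         # only drive forward when roughly aligned
                cmd.linear.x = min(0.3, 0.5 * distance)
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('move_to_goal')
    MoveToGoal()
    rospy.spin()
```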

The final challenge, for now at least, is to get the navigation stack to work for SCUTTLE. In order to do so a few new packages needed to be created: scuttle_slam and scuttle_navigation.

These packages are based on the similar packages for turtlebot3, with adjustments so that the configuration matches SCUTTLE's performance.

For navigation in ROS1 you need two different types of path planners, each with its own map. The first type is the global planner, which uses a map to determine the fastest path from the current location to the goal location. The second type, called the local planner, navigates the robot to the goal by trying to follow the path created by the global planner. The path followed by the local planner may deviate from the global path due to previously unknown obstacles and limitations of the robot, e.g. its ability to follow a turn. The map for each planner is known as a costmap; it indicates which parts of the surroundings are occupied by obstacles and which parts can be navigated.

After configuring the planners and the costmaps, the navigation worked. I could use RViz to set a point on the map and the virtual SCUTTLE would navigate to that location automatically. Success!
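
Once move_base, the node that hosts these planners in the ROS1 navigation stack, is running, you can also send a goal from code instead of clicking in RViz. A minimal sketch using actionlib, with example coordinates in the map frame:

```python
#!/usr/bin/env python3
# Sketch: sending a navigation goal to move_base from code instead of
# clicking in RViz (ROS1 / rospy + actionlib). Coordinates are examples.
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('navigation finished with state %d', client.get_state())
```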

Scuttle navigating in RViz
SCUTTLE robot navigating a room in RViz

Now, not everything was fine. The first issue is some weird behaviour with the DWA local planner. Once the robot is moving, the local planner mostly does a good job. However, when starting a path, it takes a while for the local planner to pick up the global path. In fact, the DWA planner doesn't seem to accept the global plan until after a rotate recovery has taken place. So far, I haven't found a solution to this problem.
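
One way to experiment with this kind of behaviour is to tweak the DWA planner's parameters at runtime through dynamic_reconfigure. The parameter values below are examples to play with, not a fix I've verified:

```python
#!/usr/bin/env python3
# Sketch: tweaking DWA planner parameters at runtime via dynamic_reconfigure.
# The values are examples for experimentation, not a verified fix.
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node('tune_dwa')
client = Client('/move_base/DWAPlannerROS', timeout=10)

# Bias the planner more strongly towards the global path and look further
# ahead when simulating candidate trajectories.
client.update_configuration({
    'path_distance_bias': 64.0,
    'sim_time': 2.0,
})
```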

The second issue is that, in some cases, the navigation stack fails to find a path out of a narrow hallway or through a narrow door. In general this happens when exploring a location, e.g. using the explore_lite package. It seems that the algorithm can't find a turn that will rotate the robot in the available space, even though SCUTTLE is able to perform in-place rotations. At the start of a navigation exercise, in-place rotations are used freely. However, once the robot is on the move, the algorithm doesn't seem to apply them anymore.

Finally, you have to keep in mind that the default planners for ROS are path planners: they plan a path from the start to the destination. These paths only describe the direction a robot should take at a given location; they don't describe velocity or acceleration. Describing only the direction can generate paths with abrupt turns that force the robot to slow down significantly. Using a trajectory planner, which at least prescribes velocities, makes for a smoother experience for robot and cargo.
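
To make the difference concrete, here is a toy illustration in plain Python (not a ROS API): a path fixes only the geometry, while a trajectory also fixes the timing, and thereby the velocities.

```python
# A path: only where to be, nothing about when or how fast.
path = [
    (0.0, 0.0, 0.00),  # (x, y, heading)
    (1.0, 0.0, 1.57),  # an abrupt 90-degree turn at this waypoint
]

# A trajectory: the same corner, but with timing, which implies velocity
# and lets a planner slow the robot down into the turn.
trajectory = [
    (0.0, 0.0, 0.0, 0.00, 0.30),  # (t, x, y, heading, v)
    (4.0, 1.0, 0.0, 1.57, 0.05),  # slower approaching the abrupt turn
]
```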

Starting robotics - Learning Robot Operating System (ROS)

Tuesday, May 3, 2022

As part of my journey into robotics I have been working on updating my SCUTTLE robot to use the Robot Operating System (ROS). ROS provides a number of different things that make robot development much easier. The main items are a middleware layer for communication between the different parts of the robot, hardware abstractions for sensors, motors and controllers, device drivers, and many other libraries and packages.
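
To give a flavour of that middleware layer: nodes exchange typed messages over named topics. A minimal publisher looks like this (ROS1/rospy; the topic name is arbitrary):

```python
#!/usr/bin/env python3
# A minimal sketch of the ROS1 middleware in action: one node publishing
# messages that any other node can subscribe to.
import rospy
from std_msgs.msg import String

rospy.init_node('talker')
pub = rospy.Publisher('/chatter', String, queue_size=10)
rate = rospy.Rate(1)  # publish at 1 Hz

while not rospy.is_shutdown():
    pub.publish(String(data='hello from the talker node'))
    rate.sleep()
```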

The main benefit of using ROS is that it provides a lot of integrations and functionality that you can use right away. The drawback that comes with all of this is that the learning curve for ROS is very steep. The documentation is pretty good and so are the tutorials, but there are a lot of different parts to ROS, which makes for a lot of interesting ways to get confused. So, to speed up my progress, I decided to do the ROS for beginners I and II courses on Udemy. These courses were very helpful in reducing the learning curve and quickly getting me familiar with ROS.

Scuttle in Gazebo
SCUTTLE robot in Gazebo

This post won't explain how ROS works; there are many, many, many tutorials out on the web that do a far better job than I can. However, I do want to share some of the things I learned from working with ROS.

The first thing to consider is the operating system on which you want to run ROS. ROS is developed to run on Ubuntu, while my home PC runs Windows. ROS1 wasn't designed to run directly on Windows (ROS2 does support it), but there are several ways to run it anyway. First, you can run ROS Noetic straight on Windows using Robostack. This uses the Conda package manager and provides packages for all operating systems. I found that this works moderately well: a number of packages are missing and occasionally things error out. This approach works well for simple learning exercises but may not yet be suitable for large ROS applications.

A second approach is to run ROS on WSL2. This runs the native Ubuntu packages, so all parts of ROS work, and with the help of an X server like VcXsrv you can even run the graphical tools. One thing to keep in mind with WSL is that networking may cause problems if you run ROS distributed over more than one compute device, e.g. a laptop and a physical robot. There is no easy way to expose WSL applications to inbound network connections: a request started from inside WSL works, but a request started from outside WSL won't be able to connect. This matters because ROS nodes need to be able to communicate with each other freely. The result is that the nodes on the WSL side will seem connected and functional, while the other nodes won't be able to send messages to the WSL nodes.

The final approach is to run ROS on an Ubuntu VM or a physical Ubuntu machine. As long as that machine is reachable over the network by the other compute devices, it is possible to run ROS distributed over the network. This is the way I currently run ROS.

Scuttle in RViz
SCUTTLE robot in RViz

Once you have a working ROS installation, the next thing you'll find out is that ROS configurations can be difficult to get right, especially when you're working with a physical robot, where visibility into what is going on may be limited. Fortunately there are a number of useful tools that provide insight into what your robot is doing.

The first tool is Gazebo, which provides a simulated environment for ROS robots. The simulation is based on a physics engine that models real-world physics with good accuracy. It also provides models for sensors, like LIDAR and cameras, including sensor noise to simulate real-world sensor behaviour. A simulated environment lets you repeat behaviours many times, in exactly the same way, in rapid succession. Being able to easily repeat behaviours and control the environment means you can quickly test and debug specific behaviours, something that can be much more difficult with a physical robot.

The second tool, RViz, provides visualization of the environment of the robot and how the robot perceives that environment. It allows you to visualize what the robot can ‘see’. RViz works by subscribing to the different message topics that are available. This means it works both for simulated robots (using Gazebo) and physical robots.

The final tool worth discussing is Foxglove Studio, which also provides insight into the data the robot generates, both from sensors and in the form of messages sent between the different components of the robot. One of the nice features of Foxglove is that you can plot values taken from messages. For instance, you can plot the velocity components of a Twist message, which is useful for comparing requested velocities with the velocities actually achieved. Another great feature is that Foxglove can display the ROS logs and lets you filter and search them. Given that ROS logs quickly become large, the ability to filter is very useful.

Scuttle in RViz with LIDAR overlay
SCUTTLE robot in RViz with LIDAR overlay

When working with a mobile robot, like I am, getting the robot to navigate a space is often one of the first achievable goals. The navigation stack in ROS provides a lot of the basic capabilities needed to get started with robot navigation in a reasonable time span. Do note, however, that the navigation stack is fairly large and has a lot of configuration options, so it is wise to set aside some time to learn about them. I'll talk about navigating with SCUTTLE in a future post.

As mentioned, I started learning ROS1 with Udemy. My goal for learning ROS was to use it for navigation with my SCUTTLE robot; more on that in a future post. Once I manage to get navigation working for SCUTTLE I plan to start adding different sensors. Finally, I want to enable task planning for SCUTTLE, e.g. tasks like “drive to the living room, collect my coffee cup and bring it back to me”.

Another part of my plans is to upgrade to ROS2. ROS1 reaches end-of-life in 2025, which is only 3 years away, and ROS2 has a more modern stack with Python 3, better communication security, an improved navigation stack and more active development. More on this will follow in a future post once I have upgraded my robot to ROS2.

Starting robotics with the SCUTTLE robot

Friday, March 18, 2022

As mentioned in my last post I have started tinkering with mobile robots. My current goal is to build an outdoor-capable autonomous mobile robot. The first problem to solve on the way to that goal is that I know a decent amount about software, a reasonable amount about structures and mechanics, and very little about electronics. Oh, and I know nothing about robotics algorithms: how navigation works, the fact that robots may have a hard time figuring out where they are, or that decision making is hard for robots.

So, in order to not have to learn everything at the same time, I decided it would be sensible to start off by buying a kit that I could assemble and learn to work with. The basic requirements were:

  • Something that didn't require me to solder electronics or 3D print parts, because I have neither of those tools, yet ...
  • Capable of actually carrying a load of some sort. Most robot kits are fun platforms to play with but other than driving around they're not actually capable of carrying things. I want my robot to be able to carry things for me.
  • With accessible hardware and software so that I could modify and extend it.
  • Affordable, because money is still finite

Scuttle assembled
SCUTTLE robot assembled

After a little bit of searching I decided to buy the SCUTTLE robot kit. SCUTTLE is an open source kit for which all the build information is available online, from the 3D drawings to the material BOM. Additionally there is a lot of sample code that makes it easy to get going with the robot. There are code samples that let you drive the robot with a gamepad or put it in follow mode, where it follows a coloured object. Note that orange is apparently the best colour for the object to follow, because of the colour difference with typical surroundings. In my case I initially picked a dark red object in a poorly lit environment with lots of other variations of red around. You can probably imagine how well that went ... [*]
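
To give an idea of what such a follow mode does under the hood, here is a minimal sketch of the colour-threshold step using OpenCV. This is my own illustration, not the SCUTTLE sample code, and the HSV range is an assumed tuning for an orange object:

```python
# Sketch of the colour-threshold step behind a follow-the-object mode.
# Not the SCUTTLE sample code; the HSV range is an assumed tuning for orange.
import cv2
import numpy as np

frame = cv2.imread('camera_frame.jpg')  # stand-in for a live camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Orange occupies a narrow hue band that is well separated from most indoor
# colours, which is why it tracks more reliably than a dark red object.
lower = np.array([5, 120, 120])
upper = np.array([20, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# The centroid of the matching pixels gives a steering target.
m = cv2.moments(mask)
if m['m00'] > 0:
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    print('object centre at', cx, cy)
```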

Assembly of the SCUTTLE robot is pretty easy: it consists of aluminium T-slot lengths, some 3D printed parts and some electronics. The T-slot lengths are fastened with angle brackets and the 3D printed parts bolt onto the T-slot lengths. The kit I bought only required connecting the electronic parts with connectors, no soldering required. If you build a SCUTTLE from scratch there is some soldering to be done.

Once you have assembled your SCUTTLE you can test the functionality by using the code samples to verify the encoders and the motors. Note that it is wise to review your cabling before turning anything on because it is possible to connect some of the electronics incorrectly. I ended up breaking my Raspberry Pi, quite possibly by connecting the encoders backwards or something similar.

Scuttle in RViz
SCUTTLE in RViz

After verifying that the motors rotate in the correct direction you can try controlling the robot via the gamepad and drive it around the house.

Currently I'm working on updating my SCUTTLE to run the ROS software. I'm testing with ROS Noetic, but I'm looking to eventually switch to ROS2 as it seems to have a more flexible navigation stack. More on that in a future post.

I'm also planning to add some sensors to my SCUTTLE to make it a bit more autonomous. The first plan is to add a bumper that will tell the robot if it has hit something. I have picked up some contact switches but am still thinking about the design for the bumper. Later on I want to add sonar, Time of Flight (ToF) sensors and potentially cameras as well.
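
As a sketch of how such a bumper could be read out, assuming a normally-open contact switch wired between a GPIO pin and ground on the Raspberry Pi (the pin number and polling rate are my own choices):

```python
#!/usr/bin/env python3
# Hedged sketch: polling a bumper contact switch on a Raspberry Pi GPIO pin.
# The pin number is an assumption; the switch is wired between the pin and
# ground, with the internal pull-up keeping the pin high until contact.
import time

import RPi.GPIO as GPIO

BUMPER_PIN = 17  # example BCM pin number

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUMPER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        if GPIO.input(BUMPER_PIN) == GPIO.LOW:  # switch closed: we hit something
            print('bump detected')
        time.sleep(0.02)  # poll at 50 Hz
finally:
    GPIO.cleanup()
```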

[*] SCUTTLE drove straight at the red coloured couch instead of following the red object I wanted it to follow

Building an autonomous mobile robot - But why?

Monday, March 7, 2022

Over the last decade I have been developing software of all kinds. I have coded both for work and for my own learning and entertainment. The majority of the coding I did over that period was code that lived only in the virtual world, ranging from numerical simulations to building software infrastructure.

While I have always greatly enjoyed writing code and learning new skills I have found myself being less and less interested in writing more code that only lives in a virtual world. It feels like there is no purpose for this code, like something is missing.

The main thing I feel is missing is interaction with the physical world: the ability to see the effects of the code when it is executed, to see things move and react to the world. At university I studied Aerospace engineering and Mechanical engineering, two disciplines heavily involved with real-world physics. The fact that a design lives in the physical world adds all kinds of additional constraints and behaviours, some the result of interesting physical phenomena and others the result of the continuous and analogue nature of the physical world. All of this makes the engineering problems more interesting and challenging.

One of the ways I have been making physical things is by working with timber. I've built several pieces of furniture and generally greatly enjoy working with wood. However, woodworking misses the technological side that I also enjoy.

A domain that combines the physical and virtual worlds with large amounts of technology, and which recently caught my eye, is robotics. As robotics is a combination of many different fields, there is a lot to discover and learn. Even the design of a simple robot involves fields like structural mechanics, electronics, software, perception and machine intelligence. As a bonus, thanks to the availability of relatively cheap electronics and structural components, it is possible to build an interesting robot yourself.

My current goal is to build an autonomous mobile robot that is capable of navigating outdoor spaces while carrying some kind of cargo from one location to another. Before I get to designing and building a robot from the ground up there are a lot of things to learn. In order to speed up the learning process I picked up a SCUTTLE robot kit and started learning about ROS. More on that will follow in a future post.

Software development pipeline - Security

Sunday, November 8, 2020

One of the final chapters in the description of the development pipeline deals with security. In this case I specifically mean the security of the pipeline and the underlying infrastructure, not the security of the applications which are created using the pipeline.

The first question is: why should you care about the security of the pipeline? After all, developers use the development pipeline via secured networks, and their access permissions are set at the source control level. Additionally, high trust levels exist between the pipeline processes, the infrastructure and the source repository. In general this leads to pipeline security being placed lower on the priority list.

What issues could you run into if you deem the security of the pipeline less critical? One argument comes from pen tests, which show that CI/CD systems are a great way into corporate networks. Additionally, there have been a number of attacks aimed at distributing malicious code through trusted software packages. These so-called supply chain attacks try to compromise the user by inserting malicious code into third-party dependencies, i.e. the source code supply chain.

In essence the problem comes down to the fact that the build pipeline and its associated infrastructure have access to many different systems and resources which are normally not easily accessible to its users. This makes your pipeline a target for malicious actors, who could abuse situations like the following for their own purposes.

  • The development pipeline runs all tasks with the same user account on the executors and thereby the same permissions. Obviously the worst case scenario would be running as an administrator.
  • Multiple pipeline invocations executed on a single machine, either in parallel or sequentially, which allows a task in one pipeline to access the workspace of another. This can, for instance, be used to bypass access controls on source code.
  • Downloading packages directly from external package repositories, e.g. NPM or Docker.
  • Direct access to the internet, which allows downloading of malicious code and uploading of artefacts to undesired locations.
  • The development pipeline has the ability to update or overwrite existing artefacts.
  • The executors have direct access to different resources that normal pipeline users don't have access to. Specifically if the same infrastructure is used to build artefacts and to deploy them to the production environment.

One of the problems with securing the development pipeline is that all the actions mentioned above are in one way or another required for the pipeline to function; after all, the pipeline needs to be able to build and distribute artefacts. The follow-up question then becomes: can you distinguish between normal use and malicious use?

It turns out that making this distinction is difficult because both forms of use are essentially the same: they both use the development pipeline for its intended purpose. So, in order to prevent malicious use, put up as many barriers to it as possible, also known as defence in depth. The following are a number of possible ways to add barriers:

  • Grant the minimal possible permissions for the executors, both on the executor and from the executor to the external resources. It is better to run the pipeline actions as a local user on the executor, rather than using a domain user. Grant permissions to a specific resource to the action that interacts with the resource.
  • Execute a single pipeline per executor and never reuse the executor.
  • Limit network connections to and from executors. In general executors do not need internet access, save for a few pre-approved sites, e.g. an artefact store. There is also very little reason for executors to connect to each other, especially if executors are short lived.
  • Pull packages, e.g. NPM or Docker, only from an internal feed. Additions to the internal feed are made after the specific package has been reviewed.
  • The artefacts created with the pipeline should be tracked so that you know the origin, creation time, storage locations and other data that can help identify an exact instance of an artefact. Under ideal circumstances you would know exactly which sources and packages were used to create the artefact as well (see the sketch after this list).
  • Artefacts should be immutable and should never be allowed to be overwritten.
  • Do not use the executors that perform builds for deployments, use a set of executors that only have deployment permissions but no permissions to source control etc..
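
As an illustration of the artefact-tracking point above, here is a minimal sketch of recording provenance for a build artefact. The field names and file layout are assumptions, not an established standard:

```python
#!/usr/bin/env python3
# Hedged sketch: recording basic provenance for a build artefact so that an
# exact instance can be identified later. Field names are assumptions.
import datetime
import hashlib
import json
import pathlib

def record_provenance(artefact: pathlib.Path, commit: str, out: pathlib.Path):
    digest = hashlib.sha256(artefact.read_bytes()).hexdigest()
    metadata = {
        'artefact': artefact.name,
        'sha256': digest,         # identifies this exact artefact instance
        'source_commit': commit,  # which sources it was built from
        'created_utc': datetime.datetime.utcnow().isoformat() + 'Z',
    }
    out.write_text(json.dumps(metadata, indent=2))

# Example: record the metadata next to the artefact itself.
record_provenance(pathlib.Path('app.tar.gz'), 'abc123',
                  pathlib.Path('app.tar.gz.json'))
```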

Beyond these changes there are many other ways to reduce the attack surface, as documented in the security literature. In the end the goal of this post is to point out that the security of the development pipeline is important, rather than to provide an exhaustive set of ways to make a pipeline more secure. The exact solutions depend heavily on the way the pipeline is constructed and on what other forms of security validation have been placed around it.