My journey to understand rust-lang
I tinkered with rust-lang a good while ago… probably about 5 or 6 years ago. Nifty language, but it didn’t stick. Cargo was just finding its legs, and I couldn’t see why I would want Rust when I already knew how to use C++ safely and correctly, especially since the newer C++ standards have gained a lot of functionality.
A close friend shared why he preferred Rust to languages like C and C++. I trust him, so I decided to take another look… sometime. That was a year or so ago. He recently left his long-held position at Google for oxide.computer, which made me think about Rust again. The final straw was a discussion with a co-worker about recent changes to the Python language that import features from other languages, notably Rust and JavaScript.
First thoughts
I started out the way that I start with any new programming language. I created a directory named rust in ~/Source and opened https://www.rust-lang.org/tools/install. I have a few general rules about how I manage my local computing environment:
- I don’t install things outside of my home directory
- I generally install things into ~/opt when possible
- I don’t pipe the output of curl into a shell
- I don’t permit installers to modify my shell environment
The usage of .local hasn’t taken hold in my brain yet, but those are my general ideals. Installing Rust by retrieving a shell script and piping it through /bin/sh bothered me. Instead, I downloaded the installation script, reviewed it, and then ran it. Nothing too surprising in there, but I did note that I could tell it to leave my shell environment alone, so the review was worth it for that alone.
Running the installer was very quick and I ended up with two new “dot directories” in my home directory… a personal dislike of mine, but I’ll deal with it. I added ~/.cargo/bin to my $PATH and moved on to reading “The Rust Programming Language” by running rustup doc. This is the first place that I paused and thought: that’s nifty! rustup doc launches a web browser (Safari in my case) to view a local copy of the book. A very nice touch indeed. It guarantees that the version I am reading matches the Rust environment on my laptop. More importantly, I have a decent amount of material to read and learn from when I am not connected to the Internet. Most importantly, I did not ask for this; it is simply included! This spoke to the Python programmer in me who likes “batteries included” languages.
I spent the rest of the day trying to write a complete application with barely any language knowledge. I managed to get through the first two chapters of the Rust book before striking off to write an application. Why not?
I decided to implement an HTTP endpoint that receives a webhook notification from GitLab and stores some of the details in a PostgreSQL database. I’ve been doing a lot with GitLab’s HTTP API lately and know the ins and outs of processing JSON documents and storing data in PostgreSQL, so it seemed achievable.
Creating a webhook listener
I decided to use the Rocket web framework because it’s what Google returned. The first rough edge that I hit is that Rocket requires the nightly version of the Rust compiler. I ran into this immediately on the first cargo build:
$ cargo build
    Updating crates.io index
Downloaded httparse v1.3.6
Downloaded traitobject v0.1.0
Downloaded version_check v0.9.3
Compiling unicase v1.4.2
Compiling indexmap v1.6.2
Compiling base64 v0.9.3
error: failed to run custom build command for `pear_codegen v0.1.4`

Caused by:
process didn't exit successfully: `/Users/daveshawley/Source/rust/webhook-listener/target/debug/build/pear_codegen-dff041093c6dfe2b/build-script-build` (exit code: 101)
--- stderr
Error: Pear requires a 'dev' or 'nightly' version of rustc.
Installed version: 1.51.0 (2021-03-23)
Minimum required: 1.31.0-nightly (2018-10-05)
thread 'main' panicked at 'Aborting compilation due to incompatible compiler.', /Users/daveshawley/.cargo/registry/src/github.com-1ecc6299db9ec823/pear_codegen-0.1.4/build.rs:24:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed
zsh: exit 101 cargo build
I had glossed over the line in the quickstart that mentioned the nightly requirement. Their Getting Started document pointed me to running rustup override set nightly, which sets the toolchain version to use for a single project. This seemed similar to what I would use a virtual environment for in Python. I ran the command, noted that it only affected the current project, and moved on. I made a note to come back and look at how this works in more detail later. I scanned through the Getting Started document to get a feel for how Rocket applications are structured and found it similar enough to other frameworks that I have used. I wrote the following little application and ran it.
#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use]
extern crate rocket;

#[post("/pipeline")]
fn process_pipeline() -> &'static str {
    "Yippee!"
}

fn main() {
    rocket::ignite()
        .mount("/", routes![process_pipeline])
        .launch();
}
It worked as advertised. The code compiled and ran pretty quickly. It did have to compile the third-party crates the first time, but that was expected. I shot a POST request off using curl http://127.0.0.1:8000/pipeline and it failed to connect. That surprised me. The output from the run told me it was running on port 8000… 🤔
It turns out that this is one of the cases where localhost and 127.0.0.1 are indeed different things. It seems like Rocket binds to IPv6 only by default… also interesting. I sent another POST to localhost this time and it worked. Crisis averted.
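One way to sidestep the ambiguity would be to bind to an explicit address. Rocket can be configured through a Rocket.toml file or programmatically; here is a rough sketch of the programmatic route, assuming the Rocket 0.4-series rocket::config builder rather than anything I actually shipped:

#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use]
extern crate rocket;

use rocket::config::{Config, Environment};

#[post("/pipeline")]
fn process_pipeline() -> &'static str {
    "Yippee!"
}

fn main() {
    // Bind to an explicit IPv4 address rather than relying on how
    // "localhost" resolves on this machine.
    let config = Config::build(Environment::Development)
        .address("127.0.0.1")
        .port(8000)
        .finalize()
        .expect("invalid Rocket configuration");

    rocket::custom(config)
        .mount("/", routes![process_pipeline])
        .launch();
}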
A few other things that I made note of:
- “Configured for development” made me think that Rocket supported a more complex configuration mechanism (spoiler: it does)
- They covered the knobs that I expect from a good HTTP server
- They use emojis in their console output :-(
First impression… very nice!
I’ve worked with a lot of HTTP frameworks in Python and this was a spectacular first foray. The code is concise, though I don’t understand all of it yet, and the time from first looking at https://rocket.rs to having a running application was surprisingly short: less than 30 minutes and 15 lines of code.
Time to figure out how to process request data… I browsed through the documentation and found the “Requests” section. Eww… a lot of time spent talking about URL-encoded form data. Though it was really talking about how data is represented and how Rocket (or more correctly serde) transforms incoming data into native Rust data structures. That works for me!
The payload for Gitlab Pipeline events is documented in the Webhooks section of their API. I couldn’t tell from a quick read of the Rocket/serde documentation whether I would have to represent the entire structure or not. I decided to try a partial representation and see where that got me. After a few missed attempts and some doc reading, I got a webhook notification to print out to the console! I learned that serde uses the structs that you include as a filter on the JSON body — in other words, anything in the body that you don’t explicitly mention is discarded. This is an awesome approach that more libraries should implement. If I recall correctly, the JSON implementation in go-lang does the same.
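To make that filtering behaviour concrete, here is a tiny sketch (not code from the project) that deserializes a body carrying more fields than the struct declares; it assumes serde (with the derive feature) and serde_json in Cargo.toml, and the field names are made up for illustration.

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Project {
    id: i64,
    name: String,
}

fn main() {
    // The body carries an extra "visibility" field that the struct
    // does not declare...
    let body = r#"{"id": 42, "name": "webhook-listener", "visibility": "private"}"#;

    // ...and serde_json simply ignores it during deserialization.
    let project: Project = serde_json::from_str(body).expect("valid JSON");
    println!("{:?}", project);
}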
Since everything is in a working state, I decided to commit my work before moving on. By the way, cargo new also creates and initializes a git repo for you!
I continued on to implement the webhook listener in an incredibly short amount of time for a new language. By the end of a day of working at it during downtime (waiting for builds, in meetings, etc.), I had a working example that received a JSON payload, verified that it was received with the appropriate secret key header, and stored details in a local PostgreSQL instance.
For a new language, getting this working in a single day that started with reading the first two chapters of “The Rust Programming Language” over my morning coffee is pretty awesome. Mind you, I have been programming professionally for 25 years at this point, so I am far from a novice programmer. I also have experience in a pile of languages, so picking up a new one isn’t too difficult.
But still… starting the day with reading about a language and having a web service that receives & validates a JSON blob before storing it in a database by the end of the day is just plain awesome.
After one day…
The result of playing for a day is that I can definitely see the appeal of the Rust environment. They have struck a nice balance between being opinionated and providing what you need to work effectively. Here’s what really works right now.
Packaging
This is one of the places that Python has really struggled and I can understand why cargo is so appealing. Having a fully-featured project management utility included in the standard distribution removes the need to find one that fits into your tool belt. I don’t remember cargo being so fully featured when I looked at Rust several years ago.
Python missed this to some degree. Instead of building in an opinionated packaging and source management solution, they let the community create solutions with the expectation that one would be pulled back into the core and offered as a “battery”. The Packaging SIG is working toward this with a data-driven approach that offers well-known build hooks. It could work and result in something that looks similar to what “cargo build” does. I don’t think that is enough though…
Cargo bundles a lot of functionality into one place:
- dependency management
- build & run processing
- packaging & distribution
- project creation
- environment management
Maybe I will have another look at poetry for my Python development environment 🤔
When you couple the utility of cargo with the code style enforcement of rustfmt, you remove so much of the opinionated debate without really getting in the way of expression and the craftsmanship aspect of programming. I’m really starting to like Rust.
Highly opinionated tools
The Rust ecosystem includes a fairly complete set of opinionated tools.
- cargo takes care of creating projects, managing dependencies, compiling, running, and distributing
- rustup manages your Rust environment and provides the ability to run different versions in different projects
- rustfmt reformats your code into a standard style
Having this set of tools available immediately means that I don’t have to figure out whether I like to have my environment in the local directory or managed under my home directory. I don’t have to decide between pure Python project management tools (setuptools) or data-driven tools (flit, poetry). I don’t have to worry about keeping my toolchain up to date or about manually installing and selecting specific versions.
I haven’t delved into testing, profiling, or linting code yet. I’m not sure if these are included in the standard utility set or not. I’m assuming that they are and I will find them later on.
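(Spoiler: testing, at least, is built in. Any function marked #[test] is compiled and run by cargo test with no extra tooling; here is a trivial, made-up example rather than anything from my project.)

fn hex_digest_length(algorithm: &str) -> Option<usize> {
    match algorithm {
        "sha256" => Some(64),
        "sha1" => Some(40),
        _ => None,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // `cargo test` discovers and runs this automatically.
    #[test]
    fn known_algorithms_have_expected_lengths() {
        assert_eq!(hex_digest_length("sha256"), Some(64));
        assert_eq!(hex_digest_length("md5"), None);
    }
}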
Don’t get me wrong, there is another side to having a feature-filled toolchain available immediately. The lack of such a toolchain in Python meant that I had to learn about each aspect of Python development when I needed it. For example, when I needed to pull in packages outside of the Standard Library, I had to figure out how to install them. At the time there were only two options: use an OS package-management utility or use easy_install. Neither really worked well and both required a bit of learning. I watched as setuptools took hold and learned a lot from it. The bonus is that I learned a lot about the underpinnings in the Standard Library that made all of this possible.
I know absolutely nothing about how cargo does what it does today. I may never need this knowledge, but I suspect that there will come a day when I have to dig in and learn the details. Superficially, I can see that the toolchains are installed in my home directory and libraries are installed in project directories. This is actually how I manage my Python environments, so 👍
It will be interesting to see how the rest of my journey goes. So far, I’m pretty happy with Rust. I like the default toolchain and have given up a few rights that have previously been sacred cows:
- I’m willing to concede to letting rustfmt do its thing… no need to form opinions about yet another syntax. It doesn’t hurt that it isn’t too far off from what I would have done in C.
- I’m willing to let cargo manage the project scaffolding. Project scaffolding has been one of my most-painted bikesheds in Python. In retrospect, cargo got it right by being absolutely non-intrusive and minimal. I didn’t feel the need to tinker.
Another day, another challenge
For my second foray, I decided to rewrite the GitLab application using GitHub as the source of notifications this time. The largest difference between them is that GitLab notifications are secured using a shared token passed in a header, whereas GitHub uses an HMAC-SHA256 of the request body with a shared key. The serde library hides the serialization details for request bodies in my implementation. I implemented the header validation for GitLab using a request guard, with the shared key hard-coded in the application.
When I implemented the guard, I cut myself on another sharp edge of the Rust ecosystem. Rocket makes a point of using nightly builds only and staying on the bleeding edge. I haven’t figured out why this is, but it is certainly the case. I encountered a few compiler panics when working through the request guard implementation. Here is the example from the Rocket documentation:
use rocket::request::{self, Request, FromRequest};

#[rocket::async_trait]
impl<'r> FromRequest<'r> for MyType {
    type Error = MyError;

    async fn from_request(req: &'r Request<'_>) -> request::Outcome<Self, Self::Error> {
        /* .. */
    }
}
It doesn’t look too bad. So I tried that and ended up with a compiler panic. After spending a bit of time comparing code snippets and looking around the Internet, I decided to just look at the local source code. I find myself digging into the source code far too often in Python and was hoping that I might not have to do that for Rust, but alas… Here is what the example in the doc comment of request/from_request.rs looks like:
impl<'a, 'r> FromRequest<'a, 'r> for ApiKey {
    type Error = ApiKeyError;

    fn from_request(request: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
        /* .. */
    }
}
Well… that is quite different. No need for async_trait, and there is an extra lifetime marker (I still have to learn what those are 😮). Once I reimplemented my code using the correct type signatures, it went really quickly.
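To give a sense of what that ended up looking like, here is roughly the shape a GitLab token guard takes against the 0.4-style trait. It is not my actual code: the X-Gitlab-Token header is what GitLab really sends, but the hard-coded secret and the names are purely illustrative.

use rocket::http::Status;
use rocket::request::{self, FromRequest, Request};
use rocket::Outcome;

struct GitlabApiToken(String);

#[derive(Debug)]
enum TokenError {
    Missing,
    Invalid,
}

impl<'a, 'r> FromRequest<'a, 'r> for GitlabApiToken {
    type Error = TokenError;

    fn from_request(request: &'a Request<'r>) -> request::Outcome<Self, Self::Error> {
        // GitLab repeats the shared secret in the X-Gitlab-Token header.
        match request.headers().get_one("X-Gitlab-Token") {
            None => Outcome::Failure((Status::Unauthorized, TokenError::Missing)),
            // Hard-coded secret purely for illustration.
            Some("s3kr1t") => Outcome::Success(GitlabApiToken("s3kr1t".into())),
            Some(_) => Outcome::Failure((Status::Unauthorized, TokenError::Invalid)),
        }
    }
}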
Use cargo doc --open to view the correct documentation for libraries. Lesson learned.
Now let’s talk about moving from a simple header to a header that contains a digest of the request body. This was another fairly rough edge that took a bit of rethinking how to process requests. My instinct was:
This should be pretty easy. Take the raw request body, calculate the digest, compare and fail if they don’t match.
The reality is that Rocket uses serde to automatically deserialize the request body, and the raw body is no longer available. My handler looked something like the following, with deserialization handled completely by Rocket.
#[post("/pipeline", data = "<notification>")]
fn process_pipeline(notification: Json<GitlabNotification>, _guard: GitlabApiToken) -> rocket::http::Status { /*..*/ }
The GitlabNotification type was a struct containing the fields that I cared about. Everything was decorated with serde::Deserialize, which the Json<> helper required.
#[derive(Debug, Deserialize)]
struct GitlabProject {
    id: i64,
    name: String,
    path_with_namespace: String,
}

#[derive(Debug, Deserialize)]
struct GitlabNotification {
    project: GitlabProject,
}
The result was very nice. It deserialized JSON bodies and handled unexpected MIME types and bodies that didn’t meet expectations with the appropriate HTTP status codes. Now I needed to replace that GitlabApiToken request guard with a data guard.
I spent a while trying different things before I realized that there really is no way to use both a data guard for digest verification and the JSON deserialization offered by Rocket at the same time. This took most of my morning to figure out, especially because the solution isn’t documented for the rocket::data module on the website yet… it is in the local documentation. The solution was to implement the FromDataSimple trait so that it verifies the digest and then uses serde to deserialize the body. The result was about 30 lines of code, with the vast majority of it being match statements for error handling.
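The digest check itself is the easy part. Here is a sketch of just that piece (not the exact code from my listener) using the hmac, sha2, and hex crates; the function name and secret handling are made up, and the precise trait imports differ a little between hmac crate versions.

use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

// GitHub sends the digest as "sha256=<hex>" in the X-Hub-Signature-256
// header; this helper recomputes it over the raw body and compares.
fn signature_matches(secret: &[u8], body: &[u8], header_value: &str) -> bool {
    let expected = match header_value.strip_prefix("sha256=") {
        Some(hex_digest) => hex_digest,
        None => return false,
    };
    let mut mac = HmacSha256::new_from_slice(secret)
        .expect("HMAC accepts keys of any length");
    mac.update(body);
    let computed = hex::encode(mac.finalize().into_bytes());
    // A constant-time comparison (e.g. Mac::verify_slice) is preferable;
    // plain equality keeps the sketch short.
    computed == expected
}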
By the early afternoon, I had the application receiving and validating notifications from GitHub. I also took the time to investigate configuring applications using “dotenv” files, logging, and transforming byte arrays into hex representations. If nothing else, I am feeling more comfortable about my ability to find solutions to problems in Rust. Surprisingly enough, the process normally looks like:
- encounter a need for something new (e.g., verifying a digest)
- search https://crates.io/ to see what is available
- update Cargo.toml, cargo build, cargo doc --open
- try the new module and see if it fits
The process is repeatable and seems to work pretty well. The “popularity” of crates seems to be reliable so far. I’m going to go with this approach.
After the second day…
I’m still enjoying Rust and its toolchain. I think that I’m going to go back and start reading some more since I haven’t really touched on all of those lifetime markers that I’ve been blindly typing. I did hit a few snags, but I pushed through them easily enough.
Local documentation is really nice
It might be a side-effect of Rocket moving fast, but I find myself needing to rely on cargo doc --open more often than documentation on the Internet. I am very happy that Rust followed many other languages and embeds the documentation directly in the source code. This means that I can go to the implementation in my editor and read docs without a web browser as well.
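For anyone who has not seen it, the docs live in /// comments right next to the items they describe, and cargo doc renders them to HTML. A trivial, made-up example:

/// Returns true when `value` looks like a hex-encoded SHA-256 digest.
///
/// Running `cargo doc --open` renders this comment as HTML alongside
/// the rest of the crate's API documentation.
pub fn looks_like_sha256_hex(value: &str) -> bool {
    value.len() == 64 && value.chars().all(|c| c.is_ascii_hexdigit())
}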
Toolchains really matter
I shouldn’t need to say this but I’m going to anyway. The most striking thing about Rust so far is how effective the default toolchain is. Add to that a “world class” editor (JetBrains CLion in my case) and you have a very productive development environment.
The most awesome thing about the toolchain is that you honestly do not notice the compilation phase. This could speak to my background of using make when writing C & C++ many years ago; the compilation phase figured prominently in my workflow. cargo build and cargo run completely replace that. I don’t need to think about object files, making sure to compile from source to object only when necessary, and so on. Anyone who has done C development using only make and a compiler chain knows what I am talking about. The fact that I do not have to manage this at all completely streamlines the process.
Maybe I missed the boat on this phenomenon when I tinkered with go-lang or even rust-lang a few years back, or maybe I’ve matured a bit as a developer. Stepping back and letting the toolchain do its thing for you is one of the most amazing revelations for me. I’ve been letting Python do that for me to some degree but nowhere near as much as with Rust.
Need to look at what cargo is up to…
So the Rust toolchain is simply a joy to work with. The question I have is what it means for my disk usage, backups, etc. I decided to take a look at where things stand. The directory for my little GitHub webhook receiver is consuming 917MiB of disk space. I checked a web service that I maintain in Python and it is using 123MiB, including the local database files for development.
Not so surprisingly, cargo clean exists. I ran cargo clean followed by cargo run to see what happened. It looks like a lot of that space was garbage that “clean” discarded for me. The result is 269MiB, which is about what I would expect. That is something that I am going to have to keep tabs on though… it looks like ~/.cargo/ is 318MiB and ~/.rustup/ is 1.8GiB. The comparable Python 3.9 toolchain takes about 175MiB.
Closing up
It’s been an enlightening two days of tinkering with Rust part time. I can definitely see why some of my trusted friends have been so enamoured with it. I look forward to continuing to learn Rust and its environment, and hopefully I will feel compelled to write more of it down. This is the first time that I have really dug into a new programming language since blogging became a part of my life.