Verified Commit c3d055e3 authored by Katharina Fey 🏴

Final slide iteration

parent ca1f1301
@@ -6,7 +6,12 @@ transition: none
::: notes
Yo and welcome to my talk
Welcome to my talk.
Excited to be here. This is a relevant subject for the future, and something
that Rust excels at.
Sorry for doing this in English. You can ask me questions in German.
:::
@@ -27,13 +32,21 @@ Yo and welcome to my talk
* IRC: **spacekookie** (on libera.chat and hackint)
* Twitter: @spacekookie
::: notes
Feel free to reach out to me after this talk if you want to ask
questions.
My twitter is one part politics, one part shitposts, one part Rust.
:::
---
## The work I do
<br />
* Distributed systems software researcher
* Currently working at **Ockam**
* Also working on EU funded research project **Irdest**
@@ -43,6 +56,12 @@ Yo and welcome to my talk
<span><img src="imgs/irdest.png" height="150px" /></span>
</div>
::: notes
Ockam and Irdest are somewhat similar; they attack the problem on different layers.
:::
---
## Some links to my work
@@ -81,6 +100,16 @@ Here's some links that are relevant to my work.
* Developed a Nix(OS) workshop in 2020
* Contact me for details!
::: notes
Notable companies are Mozilla and Wire.
Rust: 1 day or 3 day variants.
NixOS: 1 day with optional 1 day guided hacking.
:::
---
## Meta
@@ -88,10 +117,22 @@ Here's some links that are relevant to my work.
* Slides: **https://git.irde.st/kookiespace/talks/concurrency**
* Talk recording: **https://diode.zone/c/videokookie**
::: notes
Slides are available online with some of the code examples
:::
---
# What are all these buzzwords?!
::: notes
But before we really get going...
:::
---
## Getting jargon out of the way
@@ -105,6 +146,10 @@ Here's some links that are relevant to my work.
::: notes
Make sure everybody understands what we're talking about.
Jargon is useful, but also creates barriers!
Before we begin I want to make sure everybody has a good understanding
of what problems we are trying to solve here. This talk contains some
jargon that I would like to explain to you first.
@@ -142,6 +187,14 @@ A lot of time is spent waiting
<img src="./imgs/concurrency1.png" height="500px"/>
::: notes
Instead we handle the first bit of work for connection A.
Then while waiting, we handle the first bit of connection B.
:::
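
A minimal, single-threaded sketch of this interleaving (not from the original slides; the connection names and work steps are made up): do a bit of work for A, then a bit for B, instead of finishing one before starting the other.

```rust
fn main() {
    // Two hypothetical connections, A and B, each with a few chunks of work.
    let mut a_steps = vec!["read request", "query database", "send reply"].into_iter();
    let mut b_steps = vec!["read request", "render page", "send reply"].into_iter();

    // One thread, but progress is made on both connections by interleaving.
    loop {
        match (a_steps.next(), b_steps.next()) {
            (None, None) => break,
            (a, b) => {
                if let Some(step) = a {
                    println!("connection A: {}", step);
                }
                if let Some(step) = b {
                    println!("connection B: {}", step);
                }
            }
        }
    }
}
```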
---
## Parallelism
@@ -162,6 +215,14 @@ processes or computers)
<img src="./imgs/parallelism1.png" height="600px"/>
::: notes
Handle connections simultaneously, as they come in.
Still waiting a lot of the time!
:::
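
For contrast, a minimal parallelism sketch (not from the original slides; the connection names are made up): each connection gets its own OS thread, so the work can run simultaneously on separate cores.

```rust
use std::thread;

fn main() {
    // Spawn one thread per (hypothetical) connection.
    let handles: Vec<_> = ["A", "B", "C"]
        .iter()
        .map(|name| {
            let name = name.to_string();
            thread::spawn(move || println!("handling connection {}", name))
        })
        .collect();

    // Wait for all threads to finish before exiting.
    for handle in handles {
        handle.join().unwrap();
    }
}
```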
---
## Data correctness
@@ -192,9 +253,11 @@ correct (and verifiable!) result
::: notes
This occurs when the above guarantee is not given, i.e. data is
manipulated in a way that causes a "race condition" (will explain what
this means!)
Race conditions are concurrency bugs where data correctness is violated.
In general terms: invalid results because of parallelism or concurrency.
Reasons: unexpected order of execution or unknown program interactions.
:::
@@ -204,6 +267,16 @@ this means!)
<img src="./imgs/race_condition0.png" height="500px"/>
::: notes
I am a big fan of system modelling. In very abstract terms: C is a
race condition that interferes with the intended state change from A to B1,
so we end up at B2 instead.
To understand why this happens, let's look a bit at how computers work.
:::
---
@@ -211,10 +284,8 @@ this means!)
::: notes
Before we get into some examples for these errors, I want to briefly
talk about how computers work
BRIEFLY! (this is a model)
BRIEFLY! This is a model that ignores a lot of things but it'll be
useful to demonstrate some of these problems.
:::
@@ -226,8 +297,9 @@ BRIEFLY! (this is a model)
::: notes
It's good to keep a model of our computer in mind when writing code.
Many errors might seem obvious once we do this.
Models are always an abstraction and a model can be both simple _and_ useful.
In this model: CPU and Memory.
CPU: Executes instructions on cores and threads. Each thread has its
own cache with a shared cache between them.
@@ -235,8 +307,8 @@ CPU: Executes instructions on cores and threads. Each thread has its
Memory: Connected to the CPU, usually _very_ slow to access in CPU
scale time. This is why values get (and stay) cached.
Synchronising Cache and Memory is the CPU's job. We don't have to
worry. BUT: concurrent programs CAN break this mechanism.
We want to be aware of Caches, but how they get synchronised is not
too important here.
:::
@@ -248,13 +320,11 @@ worry. BUT: concurrent programs CAN break this mechanism.
::: notes
Say a thread wants to access some piece of memory. What will happen
is that it is copied to the core's cache, operated on, and then
synchronised.
Thread wants to use some data in a computation. It loads that data
from memory into the core's cache.
While this might seem like an implementation detail of your CPU, it is
the reason why race conditions are even possible so I think it's
important to keep in mind.
While this is a CPU implementation detail, it is also the reason why
many race conditions are possible.
:::
@@ -264,12 +334,28 @@ important to keep in mind.
<img src="imgs/cpu2.png" height="500px" />
::: notes
Next we execute our instruction. In this case, we increment the value
by 1.
:::
---
## How do computers?
<img src="imgs/cpu3.png" height="500px" />
::: notes
And then we store the number back into memory.
In reality it would go to cache first and _then_ to memory but that's
not too relevant here.
:::
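
To make the three steps concrete, a tiny sketch (my own illustration, not from the original slides; a local variable stands in for the core's cache) of the same load / modify / store sequence:

```rust
fn main() {
    let mut value_in_memory = 5;

    let mut cached = value_in_memory; // load: copy the value "into the cache"
    cached += 1;                      // modify: execute the instruction
    value_in_memory = cached;         // store: write the result back to memory

    println!("{}", value_in_memory); // 6
}
```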
---
## How do computers?
@@ -278,7 +364,9 @@ important to keep in mind.
::: notes
And they all lived happily ever after and we never wrote any bugs whatsoever.
And they all lived happily ever after.
and our programs never had any bugs whatsoever.
Thank you for coming to my talk
@@ -290,7 +378,8 @@ Thank you for coming to my talk
::: notes
Some examples of race conditions now
Okay maybe not. Let's look at how this model breaks down when we add
more threads.
:::
@@ -337,12 +426,16 @@ Some examples of race conditions now
---
We expected the result 6!
---
## "Soft" race condition
<br />
* Two threads `A` and `B`
* Over-writing a shared variable
* Overwriting a shared value
* Concurrency bug due to insufficient model
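
The slide's counter code isn't shown in this hunk, so here is a hedged sketch of the same idea: two threads each increment a shared counter three times, but because the read and the write-back are separate steps, updates can be lost and the result often falls short of the expected 6. Atomics are used only so the example stays in safe Rust; the lost update is still a race condition.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..2)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..3 {
                    // Read and write-back happen as two separate steps,
                    // so both threads can read the same old value and
                    // overwrite each other's update.
                    let current = counter.load(Ordering::SeqCst); // read
                    thread::yield_now(); // widen the window for the race
                    counter.store(current + 1, Ordering::SeqCst); // write back
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // We expected 6 (2 threads x 3 increments), but increments can be lost.
    println!("counter = {}", counter.load(Ordering::SeqCst));
}
```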
---
@@ -705,7 +798,7 @@ error[E0499]: cannot borrow `counter` as mutable more than once at a time
Rust has a variety of ways to synchronise data explicitly
* Access control - `Mutex`, `RwLock`
* Shared memory locations - `Rc`, `Arc`, `RefCell`
* Shared memory locations - `Rc`, `Arc`
* Atomic operations - `std::sync::atomic`
* Thread synchronisation - `Barrier`
* Message passing - `std::sync::mpsc`
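
As a quick taste of the first item on this list, a minimal sketch (not from the original slides) of fixing the lost-update counter with a `Mutex`: the lock serialises access, so the result is always the expected 6.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..2)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..3 {
                    // Only one thread can hold the lock at a time,
                    // so no increment is lost.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    println!("counter = {}", *counter.lock().unwrap()); // always 6
}
```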
@@ -727,7 +820,6 @@ Rust has a variety of ways to synchronise data explicitly
* Avoid having to borrow data via smart pointers
* `Rc` - reference counting wrapper
* `Arc` - same as `Rc`, but uses atomics, so thread-safe
* `RefCell` - runtime-guarded mutable access
```rust
let to_share = Arc::new("My favourite haiku".to_owned());
```
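
The rest of this code block is cut off by the diff; a hedged sketch of how the wrapped value might then be shared across threads (the thread setup is an assumption, not the original slide code):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let to_share = Arc::new("My favourite haiku".to_owned());

    // Cloning the Arc only bumps a reference count; every thread gets its
    // own handle to the same heap allocation.
    let handles: Vec<_> = (0..3)
        .map(|_| {
            let shared = Arc::clone(&to_share);
            thread::spawn(move || println!("{}", shared))
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}
```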
@@ -778,7 +870,6 @@ fn sync(b: Arc<Barrier>, i1: Instant) {
b.wait();
println!("Synchronised: {:?}!", i1.elapsed());
}
fn main() {
let b = Arc::new(Barrier::new(2)); // Number of threads to block
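
The diff cuts this example off here; a hedged reconstruction of how the rest of such a `Barrier` example might look (the second thread and the sleep are assumptions, not the original slide code):

```rust
use std::sync::{Arc, Barrier};
use std::thread;
use std::time::{Duration, Instant};

fn sync(b: Arc<Barrier>, i1: Instant) {
    // Block until the configured number of threads has called wait().
    b.wait();
    println!("Synchronised: {:?}!", i1.elapsed());
}

fn main() {
    let b = Arc::new(Barrier::new(2)); // Number of threads to block on
    let i1 = Instant::now();

    let handle = {
        let b = Arc::clone(&b);
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(100)); // simulate slower work
            sync(b, i1);
        })
    };

    sync(b, i1); // the main thread reaches the barrier too
    handle.join().unwrap();
}
```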