Dockerless, Elixir Web Application using Podman and Plug
2023-06-06
A demonstration of how to containerize an Elixir web app: rootless, for security, inside of Podman.
Also, we'll be using the Plug module to keep things simple. This module is at the heart of popular Elixir web frameworks such as Phoenix.
Plug lives at the heart of Phoenix's HTTP layer, and Phoenix puts Plug front and center.
Setup:
First, we need Podman, so head over to the official installation instructions and then come back.
Next we need an image that we can install Elixir on. I'll use Alpine Linux.
podman search alpine --filter is-official
Go ahead and pull it:
podman image pull docker.io/library/alpine

Check that it installed:
podman image list alpine
Now run it (more networking options here):
# -p host-port:container-port
podman run -dit --name alp --network=bridge -p 7070:7070 alpine /bin/ash
Connect to container and install Elixir:
Make sure the container is running first:
podman ps
If you don't see any output then you must start the container:
podman start alp

Attach to container:
podman attach alp
You should now be logged in as root (don't worry, this root can't do harm outside of the container).
Add a user that has sudo (inside the container):
adduser alpine
# enter a password for this user
# you could also change root's password with this command: passwd

Install the sudo program:
apk add sudo

Use visudo to allow the group wheel to use sudo:
visudo
# uncomment the line
# %wheel ALL=(ALL:ALL) ALL
You'll need to know a few vim commands to use visudo. If you really don't know them, look up a quick tutorial. Hey, you might end up liking vim.
Now add the new user to the wheel group:
adduser alp wheel && su alp
You should be the alp user now and not root.
Install Elixir:
sudo apk add elixir && cd ~

Create an Elixir web app:
mix new myapp --sup

We'll now install the Plug dependency:
Edit mix.exs
vi mix.exs
# and make sure deps has plug as shown below
defp deps do
[
{:plug, "~> 1.14"},
{:plug_cowboy, "~> 2.0"}
]
end
Now install plug:
mix deps.get
# You'll be asked to install hex. Type Y
Great!
Now let's make the app fault tolerant by having the application supervise it:
Edit lib/myapp/application.ex
def start(_type, _args) do
children = [
{Plug.Cowboy, scheme: :http, plug: Myapp, options: [port: 7070]}
]
Lastly, we'll add a router to the web app. Edit lib/myapp.ex
defmodule Myapp do
use Plug.Router
# matches a route
plug :match
# then forwards it to a dispatch
plug :dispatch
get "/" do
send_resp(conn, 200, "Hello")
end
get "/hello/:name" do
send_resp(conn, 200, "Hi #{name}")
end
# 404
get _ do
send_resp(conn, 404, "404 not found")
end
end
Start it up:
mix run --no-halt
+
Open up your browser and head over to localhost:7070/hello/world
Build an image from it
Now we should compile the web app into a binary and also create an image from the container to make it portable.
First let's compile the code.
In the container's terminal run:
MIX_ENV=prod
RELEASE_NAME=myapp
mix release.init
compile:
mix release
You should now have a binary located at:
_build/dev/rel/myapp/bin/myapp
You can run it as so:
_build/dev/rel/myapp/bin/myapp start
We can now exit the container so that we can build an image with this binary installed. Exit the alp user's shell, then the container itself:
exit
exit
In your host terminal run:
podman commit alp alp:v2
Now we can start up the new container with the binary running by default:
podman run -dit --network=bridge -p 7070:7070 --name alp2 alp:v2 /home/alpine/myapp/_build/dev/rel/myapp/bin/myapp start
Wrap up
The main benefits of using Podman over Docker are security related. I won't rehash it all here so if you want to learn more check out linode's explanation.
I suggest learning more about plug and its router from elixirschool.
How to Properly Learn Rust Programming
2023-08-06
The goal of this blog post is to help beginner Rust programmers overcome the notion that Rust is a difficult language.
First and foremost, I will advocate for the Rust book from Brown University over the regular one. Here: https://rust-book.cs.brown.edu/ (it requires you to scroll all the way down and accept to participate).
It provides a more thorough explanation of Rust and includes simple quizzes to test your newly gained knowledge. The original book does not contain quizzes, so many people believe they understand Rust but are completely mistaken.
Alongside the book, you'll want to test the examples using either a local programming environment or the online environment https://play.rust-lang.org/
How to set up a local workspace (skip if you prefer the online Rust Playground)
I'm assuming Rust is already installed on your system.
Create a new directory
mkdir project && cd project
Manually create a Cargo.toml for the workspace
vim Cargo.toml
[workspace]
members = [
"app",
"applib",
]
Initialize the binary directory and the library directory
cargo init --bin app && cargo init --lib applib
+
Make applib available inside of app by editing app/Cargo.toml
[package]
name = "app"
version = "0.1.0"
edition = "2021"
[dependencies]
# add this
applib = {path = "../applib"}
Now applib's functions can be imported into the app binary. Edit app/src/main.rs
use applib;
fn main() {
// applib::add() is located in applib/src/lib.rs
println!("100 + 100 = {}", applib::add(100, 100));
}
and run it:
cargo run --bin app
# 200
Memory safety
I believe Rust's memory safe idiosyncrasies are what intimidate most people from this language. However, this is what makes it a safe language that doesn't require a garbage collector. It's essential to master this part of the language to write memory safe code.
The good news is that if you write memory-unsafe code, it simply won't compile.
Ownership
Since there is no garbage collector, owned variables are dropped (destroyed) once they go out of scope, essentially once a function or block expression returns, unless the variable is returned or was passed by borrowing (also known as pass by reference).
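This drop-at-end-of-scope behavior can be observed directly. Here's a minimal sketch (the Noisy type and DROPS counter are invented for illustration, not from the examples below):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Invented example type: counts (and announces) its own drops.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
        println!("Noisy was dropped");
    }
}

fn main() {
    {
        let _n = Noisy; // owned by this inner block
        assert_eq!(DROPS.load(Ordering::SeqCst), 0); // still alive here
    } // _n goes out of scope: Drop::drop runs right here
    assert_eq!(DROPS.load(Ordering::SeqCst), 1); // dropped exactly once
}
```

No free or delete call anywhere: ownership alone decides when the destructor runs.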
The following variable will be immutable for the entire duration of the program.
fn main() {
let num = 14;
println!("{}", num); // prints 14
}
note: notice how num's type is implicitly assigned; a const variable would require explicitly assigning the type as so:
const NUM: i32 = 10;
In order to make it mutable we must add the keyword mut:
fn main() {
let mut num = 14;
num = 100;
println!("{}", num); // prints 100
}
Suppose we pass this variable into a function
fn plusOne(num: i32) {
println!("{}", num + 1); // prints 15
}
fn main() {
let num = 14;
plusOne(num);
println!("{}", num); // prints 14
}
Notice how the code was able to call println! on num after calling the function plusOne.
Normally Rust would not compile this program because any variable passed into a function (without an ampersand &) would destroy the variable.
However, Rust primitives such as u64 implement the Copy trait. The function plusOne implicitly received a copy of the variable num, and thus ownership was not transferred.
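A custom struct whose fields are all themselves Copy can opt into the same pass-by-copy behavior by deriving Copy. A sketch (Point and sum are invented example names):

```rust
// Deriving Copy gives a struct the same pass-by-copy behavior
// that primitives like i32 have.
#[derive(Copy, Clone)]
struct Point {
    x: i32,
    y: i32,
}

fn sum(p: Point) -> i32 {
    // p is a copy; the caller's value is untouched
    p.x + p.y
}

fn main() {
    let p = Point { x: 3, y: 4 };
    let s = sum(p); // p is copied, not moved
    assert_eq!(s, 7);
    assert_eq!(p.x, 3); // p is still usable here
    println!("sum = {}", s);
}
```

Note this only compiles because every field of Point is itself Copy; adding a String field would make the derive a compile error.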
Let's see how Rust transfers ownership of a struct that doesn't implement Copy
struct Person{
name: String
}
fn getName(pers: Person) {
println!("name is {}", pers.name); // drops the variable
}
fn main() {
let Carl = Person{name: String::from("Carl")};
getName(Carl); // prints name is Carl
// but we can no longer use Carl as it was dropped
// println!("{}", Carl.name); would not work here as it did with num above
}
In order to use Carl, after calling getName, we'd be required to pass it as a reference using &
...
fn getName(pers: &Person) {
println!("name is {}", pers.name); // does not drop the variable
}
fn main() {
let Carl = Person{name: String::from("Carl")};
// placing an & before the variable passes it as borrowed
getName(&Carl); // prints name is Carl
println!("{}", Carl.name); // prints Carl
}
We could also transfer ownership of Carl to another variable just as we could into a function.
...
fn main() {
let Carl = Person{name: String::from("Carl")};
// move ownership
let Carl2 = Carl;
// Carl is no longer available
// Carl2 is available
println!("{}", Carl2.name); // prints Carl
}
A variable can also be converted to mutable when moving it
...
fn main() {
let Carl = Person{name: String::from("Carl")};
// move to mutable ownership
let mut Carl2 = Carl;
Carl2.name = "Carl2".to_string();
println!("{}", Carl2.name); // prints Carl2
}
Alternatively, a function can take a mutable borrow and change the value without deleting the variable.
...
// changes the borrowed variable without dropping it
fn changeName(pers: &mut Person) {
pers.name = "Carl2".to_string();
}
fn main() {
let Carl = Person{name: String::from("Carl")};
let mut Carl2 = Carl;
changeName(&mut Carl2);
println!("{}", Carl2.name); // prints Carl2
}
However, if the variable is a primitive then ownership is not transferred.
fn main() {
let a = 10;
// a is copied and thus is not dropped
let b = a;
println!("{a} and {b} are clones"); // prints 10 and 10 are clones
}
While a variable is borrowed mutably it cannot also be borrowed immutably. Only when the variable is no longer referenced by the borrower can it be again borrowed.
struct Person {
name: String
}
fn main() {
let mut Carl = Person{name: "Carl".to_string()};
// borrow Carl mutably
let borrowCarlMutably = &mut Carl;
// Carl's name is indirectly changed to Carl2
borrowCarlMutably.name = "Carl2".to_string();
// Carl cannot be assigned to an immutable variable as so:
// let c = &Carl;
println!("{}", borrowCarlMutably.name); // prints Carl2
// Carl can now again be borrowed immutably because
// borrowCarlMutably is no longer referenced
let borrowCarlImmutably = &Carl;
println!("{}", borrowCarlImmutably.name); // prints Carl2
// Carl is still the owner as we only borrowed it above
// Now that the mutable borrow is dropped we can use it again
println!("{}", Carl.name); // prints Carl2
}
Variables stored as an Option type will be dropped in a match statement unless the unpacked variable is prefixed with ref.
fn main(){
let name = Some(String::from("Carl"));
match name {
// notice the ref keyword
// using Some(n) instead will not compile
Some(ref n) => println!("Hello {}", n),
_ => println!("no value"),
}
// if ref is not added, this would cause the program to not compile
// since name would have been dropped in the match statement
println!("Hello again {}", name.unwrap());
}
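As a sketch of an alternative to ref: matching on a reference to the Option (&name) also avoids moving the String out of it.

```rust
fn main() {
    let name = Some(String::from("Carl"));

    // Matching on &name borrows the Option, so the String inside
    // is not moved out of `name`.
    match &name {
        Some(n) => println!("Hello {}", n),
        None => println!("no value"),
    }

    // `name` still owns its String here.
    assert_eq!(name.as_deref(), Some("Carl"));
    println!("Hello again {}", name.unwrap());
}
```

ref predates Rust's match ergonomics; matching on a reference like this is the more common modern style.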
Lifetimes
Rust also requires that a borrowed variable's data have a lifetime. Simply because you wouldn't want your borrowed variable to be dropped before you're done using it.
In fact the mutable variable we declared in the code right above was implementing lifetimes.
This code will NOT compile because the lifetime of borrowCarlMutably ends when we use Carl.
...
fn main() {
let mut Carl = Person{name: "Carl".to_string()};
// borrow Carl mutably
let borrowCarlMutably = &mut Carl;
// using Carl means borrowCarlMutably can no longer be used
// because its lifetime has gone out of scope
println!("{}", Carl.name);
// WRONG!
// this should be moved above println before using Carl
borrowCarlMutably.name = "john".to_string();
}
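For contrast, here is a sketch of the same code reordered so it does compile: the mutable borrow's last use comes before Carl is read, so the borrow's lifetime has already ended (names switched to snake case):

```rust
struct Person {
    name: String,
}

fn main() {
    let mut carl = Person { name: "Carl".to_string() };

    let borrow_carl_mutably = &mut carl;
    // Last use of the mutable borrow happens first...
    borrow_carl_mutably.name = "john".to_string();

    // ...so its lifetime is over by the time carl is read directly.
    assert_eq!(carl.name, "john");
    println!("{}", carl.name); // prints john
}
```

With non-lexical lifetimes, a borrow ends at its last use, which is why statement order alone decides whether this compiles.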
So far we haven't seen the syntax of lifetimes, even though I have passed borrowed variables into functions above.
The reason is that Rust elides (omits) them for simple functions that don't force the compiler to decide which input lifetime to return.
struct Person {
name: String
}
// Elided
fn changeName(pers: &mut Person) {
pers.name = "joe".to_string();
}
// Expanded
fn changeName<'a>(pers: &'a mut Person) {
pers.name = "joe".to_string();
}
fn main() {
let mut Carl = Person{name: "Carl".to_string()};
changeName(&mut Carl);
println!("{}", Carl.name);
}
However, more complicated functions that have borrowed variables with different lifetimes will require explicitly telling the compiler. Note that an apostrophe is required for a lifetime's syntax, but the name can be anything. It's common convention to use different letters for different lifetimes (e.g., 'a, 'b).
struct Person {
name: String
}
fn changeName<'a, 'b>(pers: &'a mut Person, newName: &'b str) {
pers.name = newName.to_string();
}
fn main() {
let mut Carl = Person{name: "Carl".to_string()};
{
// variables in this scope have different lifetimes
// than those outside of {}
let newName = "Mario";
changeName(&mut Carl, newName);
}
println!("{}", Carl.name); // prints Mario
}
Essentially, Rust is making sure that any borrowed value returned from the function will live at least as long as the lifetime of one of the inputs. The compiler can then make a decision as to whether your code is valid.
struct Person {
name: String
}
// now we return a borrowed variable with a lifetime
// of newName
fn changeName<'a, 'b>(pers: &'a mut Person, newName: &'b str) -> &'b str {
pers.name = newName.to_string();
newName
}
fn main() {
let mut Carl = Person{name: "Carl".to_string()};
{
let newName = "Mario";
changeName(&mut Carl, newName);
}
println!("{}", Carl.name);
}
There exists a reserved lifetime called 'static that signifies to the compiler that the variable will live for the entire lifetime of the program. The variable will be embedded into the binary.
static GLOBAL: &'static str = "global static variable";
fn main() {
println!("{}", GLOBAL);
}
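Since string literals have the 'static lifetime, a function can also return one without borrowing any input. A minimal sketch (greeting is an invented function name):

```rust
// A string literal lives in the binary for the whole program,
// so it can be returned with the 'static lifetime.
fn greeting() -> &'static str {
    "hello from a static str"
}

fn main() {
    let g = greeting();
    assert_eq!(g, "hello from a static str");
    println!("{}", g);
}
```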
In my opinion, much of the struggle that people have wrestling with the borrow checker stems from gaps in knowledge related to ownership and lifetimes.
The quizzes in the brown.edu git book definitely help fill those gaps.
If Rust still seems confusing then I recommend A half-hour to learn Rust.
Security Hardening Linux OS
2023-03-14
These are some extra steps that you can implement to harden a Linux system.
USBGuard prevents unauthorized USB devices from connecting.
The installation page goes through the process of setting it up.
For Fedora users it's as simple as:
sudo dnf install usbguard
Make sure any USB you want to allow through is connected to a port. Then generate an initial ruleset:
# You might need to switch to root to run this
sudo usbguard generate-policy > /etc/usbguard/rules.conf
Enable the service on startup:
sudo systemctl start usbguard
sudo systemctl enable usbguard
Allowing a new USB device is as simple as:
# Plug in a new device and find it using
sudo usbguard list-devices
New devices should be automatically blocked and appear as
22: block id 08...
To allow the device simply run:
# Note that this won't make it permanent.
sudo usbguard allow-device 22
To allow the device permanently run:
sudo usbguard allow-device 22 -p
+
NTS over NTP
Network Time Protocol allows your device to synchronize its time with highly accurate atomic clock servers. However, it's very old and abused for DDoS amplification attacks.
NTS extends NTP by adding encrypted cookies that authenticate that the time data has not been tampered with. This cookie is recomputed every exchange of client/server to prevent linkability.
NTS also provides a unique identifier to detect spoofed packets, as well as an AEAD algorithm used to encrypt the cookie.
Here's the full draft.
Chrony can be easily configured for NTS as follows:
Edit /etc/chrony.conf
(make sure it's installed first)
# List of NTS servers:
server nts.netnod.se iburst nts
server ptbtime1.ptb.de iburst nts
server ptbtime2.ptb.de iburst nts
server ptbtime3.ptb.de iburst nts
# NTS cookie jar to minimise NTS-KE requests upon chronyd restart
ntsdumpdir /var/lib/chrony
then restart chrony
sudo systemctl restart chronyd
ICMP tunneling
ICMP is another protocol that can be abused by an attacker to exfiltrate private data. It can also be abused as a DDoS attack.
In Fedora, ICMP's echo request/echo reply can be disabled with the firewall:
# first check if they're already disabled
firewall-cmd --query-icmp-block=echo-request
firewall-cmd --query-icmp-block=echo-reply
# if they both say no then disable them
sudo firewall-cmd --add-icmp-block=echo-request
sudo firewall-cmd --add-icmp-block=echo-reply
I won't cover firewalls in this small guide as they should each be configured to the user's needs as well as the specific OS.
Blocking ICMP pings is generally seen as bad practice. Better would be using whitelist filters in the firewall, instead of blocking them all.
Hardening the Kernel
The simplest way to pass arguments to the kernel is with sysctl.
Simply edit /etc/sysctl.conf
# blocks kernel pointers from being exposed to an attacker
kernel.kptr_restrict=2
vm.mmap_rnd_bits=32
vm.mmap_rnd_compat_bits=16
# avoid kernel memory address exposures
kernel.dmesg_restrict=1
# disallow kernel/cpu profiling from non root
kernel.perf_event_paranoid=3
# disallow replacing the running kernel via kexec
kernel.kexec_load_disabled=1
# Avoid non-ancestor ptrace access to running processes and their credentials.
kernel.yama.ptrace_scope=1
# Disable User Namespaces, as they open up a large attack surface to unprivileged users.
user.max_user_namespaces=0
# Turn off unprivileged eBPF access.
kernel.unprivileged_bpf_disabled=1
# harden BPF JIT
net.core.bpf_jit_harden=2
Then make the changes without rebooting:
sudo sysctl -p /etc/sysctl.conf
+
More hardening parameters can be found here and also here.
Using Rust, Axum, PostgreSQL, and Tokio to build a Blog
2023-03-11
In this tutorial we'll be creating a very basic blog to get the hang of Axum.
Sure, you could just use a static site generator and push the files up to GitHub Pages, but where's the fun in that?
Setting up the project
cargo new blog-rs --bin
The dependencies I'll be using go in Cargo.toml
[package]
name = "blog-rs"
version = "0.1.0"
edition = "2021"
[dependencies]
tokio = {version="1.28.0", features = ["macros", "rt-multi-thread"]}
axum = "0.6.17"
askama = {version="0.12.0", features=["markdown"]}
sqlx = {version = "0.6.3", features = ["runtime-tokio-rustls", "postgres", "macros", "time"]}
tower-http = {version = "0.4", features=["full"]}
-
Edit main.rs
and create a server at localhost:4000/
use axum::{http::StatusCode, routing::get, Router};
+
Edit main.rs
and create a server at localhost:4000/
use axum::{http::StatusCode, routing::get, Router};
async fn index() -> String {
String::from("homepage")
@@ -26,58 +26,58 @@
.await
.unwrap();
}
-
Spin up the server with:
cargo run
-
A brief introduction to Tokio and Axum
Let's unpack Axum and Tokio a bit.
Axum is a web framework built with Tokio, Hyper, and Tower.
use axum::{http::StatusCode, routing::get, Router};
-
Tokio allows us to run asynchronous non-blocking code (but it can also run blocking code if needed). Its components include:
- A scheduler that manages tasks pushed onto a run queue.
- An async I/O driver that enables using net, process, signal.
- A time driver that enables using
tokio::time
on the runtime. - Core threads that should have no blocking code and blocking threads that can be spawned on demand to handle any blocking code.
#[tokio::main]
+
Spin up the server with:
cargo run
+
A brief introduction to Tokio and Axum
Let's unpack Axum and Tokio a bit.
Axum is a web framework built with Tokio, Hyper, and Tower.
use axum::{http::StatusCode, routing::get, Router};
+
Tokio allows us to run asynchronous non-blocking code (but it can also run blocking code if needed). Its components include:
- A scheduler that manages tasks pushed onto a run queue.
- An async I/O driver that enables using net, process, signal.
- A time driver that enables using
tokio::time
on the runtime. - Core threads that should have no blocking code and blocking threads that can be spawned on demand to handle any blocking code.
#[tokio::main]
async fn main() {
- // code here should never block
- // unless in a closure and passed to tokio::task::spawn_blocking()
+ // code here should never block
+ // unless in a closure and passed to tokio::task::spawn_blocking()
}
-
This is the equivalent of
fn main() {
+
This is the equivalent of
fn main() {
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
.block_on(async {
- // Runtime's entry point
+ // Runtime's entry point
})
}
-
Axum's Router matches a path to a handler.
let app = Router::new()
+
Axum's Router matches a path to a handler.
let app = Router::new()
.route("/", get(index));
-
Handlers can accept zero or more extractors as arguments.
The ordering of the extractors is important as only one extractor can consume the request's body. It should be placed as the last argument furthest to the right in your handler.
Anything that implements the IntoResponse trait can be returned by handlers. Axum takes care of implementing it for common types.
async fn index() -> String {
+
Handlers can accept zero or more extractors as arguments.
The ordering of the extractors is important as only one extractor can consume the request's body. It should be placed as the last argument furthest to the right in your handler.
Anything that implements the IntoResponse trait can be returned by handlers. Axum takes care of implementing it for common types.
async fn index() -> String {
String::from("homepage")
}
-
Just as an example, let's use Axum's TypedHeader extractor to send the user back their User-Agent (I'll remove this extractor and feature after this demonstration).
First I enable the headers feature in Cargo.toml
axum = {version= "0.6.17", features = ["headers"]}
-
Next I import the extractor and edit the index handler to extract the user agent and send it back to the user as a response.
use axum::{
+
Just as an example, let's use Axum's TypedHeader extractor to send the user back their User-Agent (I'll remove this extractor and feature after this demonstration).
First I enable the headers feature in Cargo.toml
axum = {version= "0.6.17", features = ["headers"]}
+
Next I import the extractor and edit the index handler to extract the user agent and send it back to the user as a response.
use axum::{
http::StatusCode, routing::get, Router,
extract::{TypedHeader},
headers::UserAgent,
};
-// go ahead and run "cargo run"
-// localhost:4000 should now print out your user agent
+// go ahead and run "cargo run"
+// localhost:4000 should now print out your user agent
async fn index(TypedHeader(user_agent): TypedHeader<UserAgent>) -> String {
String::from(user_agent.as_str())
}
-
Configuring the database
Let's get our database up and running. First make sure to download and install PostgreSQL.
Make sure the service is started (I'm running linux so here's how I'd do it)
sudo systemctl start postgresql
-
Login using psql
sudo -u postgres psql postgres
-
Set up a user and database (inside psql, run the following commands with your own username and password)
CREATE ROLE myuser LOGIN PASSWORD 'mypass';
+
Configuring the database
Let's get our database up and running. First make sure to download and install PostgreSQL.
Make sure the service is started (I'm running linux so here's how I'd do it)
sudo systemctl start postgresql
+
Login using psql
sudo -u postgres psql postgres
+
Setup a user and database (inside of psql run the following commands with your own username and password)
CREATE ROLE myuser LOGIN PASSWORD 'mypass';
CREATE DATABASE mydb WITH OWNER = myuser;
\q
-
Login with the new user and type in your password when prompted. In my case "mypass".
psql -h localhost -d mydb -U myuser
-
Create a table that will store our blog posts.
CREATE TABLE myposts(
+
Login with the new user and type in your password when prompted. In my case "mypass".
psql -h localhost -d mydb -U myuser
+
Create a table that will store our blog posts.
CREATE TABLE myposts(
post_id SERIAL PRIMARY KEY,
post_date DATE NOT NULL DEFAULT CURRENT_DATE,
post_title TEXT,
post_body TEXT
);
-
Great! Personally, I enjoy creating blog posts in markdown format. For my editor I use Ghostwriter.
I say this because I'll be storing raw markdown into the field labeled post_body.
We can now connect our app to PostgreSQL
main.rs
use sqlx::postgres::PgPoolOptions;
+
Great! Personally, I enjoy creating blog posts in markdown format. For my editor I use Ghostwriter.
I say this because I'll be storing raw markdown into the field labeled post_body.
We can now connect our app to PostgreSQL
main.rs
use sqlx::postgres::PgPoolOptions;
use sqlx::FromRow;
use sqlx::types::time::Date;
use std::sync::Arc;
-// the fields we'll be retrieving from an sql query
+// the fields we'll be retrieving from an sql query
#[derive(FromRow, Debug, Clone)]
pub struct Post {
@@ -91,41 +91,41 @@
let pool = PgPoolOptions::new()
.max_connections(5)
- // use your own credentials
+ // use your own credentials
.connect("postgres://myuser:mypass@localhost/mydb")
.await
.expect("couldn't connect to the database");
- // I fetch all of the posts at the start of the program
- // to avoid hitting the db for each page request
+ // I fetch all of the posts at the start of the program
+ // to avoid hitting the db for each page request
let posts = sqlx::query_as::<_, Post>("select post_title, post_date, post_body from myposts")
.fetch_all(&pool)
.await
.unwrap();
- // Above we retrieved Vec<Post>
- // We place it in an Arc for thread-safe referencing.
+ // Above we retrieved Vec<Post>
+ // We place it in an Arc for thread-safe referencing.
let shared_state = Arc::new(posts);
let app = Router::new()
.route("/", get(index))
.route("/post/:query_title", get(post))
- // We pass the shared state to our handlers
+ // We pass the shared state to our handlers
.with_state(shared_state);
-//
+//
-
Inserting markdown into the database
I suggest creating a new binary where we simply pass it a title and a markdown file as arguments.
Edit Cargo.toml to include a second binary that will insert a markdown file into the database
[[bin]]
+
Inserting markdown into the database
I suggest creating a new binary where we simply pass it a title and a markdown file as arguments.
Edit Cargo.toml to include a second binary that will insert a markdown file into the database
[[bin]]
name = "blog-rs"
path = "src/main.rs"
[[bin]]
name = "markd"
path = "src/bin/markd.rs"
-
Create a markdown file inside of src/bin/post.md with content of your choosing. Here's mine:
src/bin/post.md
# This is a post
+
Create a markdown file inside of src/bin/post.md with content of your choosing. Here's mine:
src/bin/post.md
# This is a post
with some content
-
Markd is very rudimentary.
It lacks any capabilities besides inserting a single file into our database.
Create src/bin/markd.rs
use std::env;
+
Markd is very rudimentary.
It lacks any capabilities besides inserting a single file into our database.
Create src/bin/markd.rs
use std::env;
use sqlx::postgres::PgPoolOptions;
use std::fs::File;
use std::io::Read;
@@ -133,14 +133,14 @@
#[tokio::main]
async fn main() -> Result<(), sqlx::Error>{
- // collects the arguments when we run:
- // cargo run --bin markd "A title" ./post.md
+ // collects the arguments when we run:
+ // cargo run --bin markd "A title" ./post.md
let args: Vec<String> = env::args().collect();
let mut inserter;
- // argument 2 should contain the file name
+ // argument 2 should contain the file name
match File::open(&args[2]) {
Ok(mut file) => {
let mut content = String::new();
@@ -152,12 +152,12 @@
let pool = PgPoolOptions::new()
.max_connections(3)
- // use your own credentials below
+ // use your own credentials below
.connect("postgres://myuser:mypass@localhost/mydb")
.await
.expect("couldn't create pool");
- // insert the title and file contents into the database
+ // insert the title and file contents into the database
let row: (i64,) = sqlx::query_as("insert into myposts (post_title, post_body) values ($1, $2) returning post_id")
.bind(&args[1])
.bind(inserter)
@@ -166,11 +166,11 @@
Ok(())
}
-
We can now use this separate binary to insert our posts into the database using the following command:
cargo run --bin markd "My post's title" ./post.md
-
Of course you'd give a different title for each new post.
Using Askama to render markdown into templates
So far so good. How about we add the Askama template engine to render our markdown posts into HTML.
edit main.rs
use askama::Template;
+
We can now use this separate binary to insert our posts into the database using the following command:
cargo run --bin markd "My post's title" ./post.md
+
Of course you'd give a different title for each new post.
Using Askama to render markdown into templates
So far so good. How about we add the Askama template engine to render our markdown posts into HTML.
edit main.rs
use askama::Template;
-// Each post template will be populated with the values
-// located in the shared state of the handlers.
+// Each post template will be populated with the values
+// located in the shared state of the handlers.
#[derive(Template, Debug)]
#[template(path = "posts.html")]
@@ -179,12 +179,12 @@
pub post_date: String,
pub post_body: &'a str,
}
-
Askama looks for templates outside of the src folder. Create a folder called templates in the same spot that your Cargo.toml resides.
We should also make a base template that our post template can extend from.
templates/base.html
<!DOCTYPE html>
+
Askama looks for templates outside of the src folder. Create a folder called templates in the same spot that your Cargo.toml resides.
We should also make a base template that our post template can extend from.
templates/base.html
<!DOCTYPE html>
<html lang="en">
<head>
<title>{{ post_title }}</title>
- <!-- we'll use Tower middleware to serve this static content soon-->
- <link href="/assets/post.css" rel="stylesheet" type="text/css">
+ <!-- we'll use Tower middleware to serve this static content soon-->
+ <link href="/assets/post.css" rel="stylesheet" type="text/css">
</head>
<body>
@@ -194,7 +194,7 @@
</div>
</body>
</html>
-
templates/posts.html
{% extends "base.html" %}
+
templates/posts.html
{% extends "base.html" %}
{% block post %}
<div class="post_title">
@@ -207,15 +207,15 @@
{{ post_body|markdown }}
</div>
{% endblock post %}
-
We need a handler to serve our static CSS. Fortunately, Tower has middleware we can use including tower_http to take care of this.
First create a folder titled assets
in the same spot that main.rs resides. Inside of assets create post.css
with some CSS.
assets/post.css
body {
+
We need a handler to serve our static CSS. Fortunately, Tower has middleware we can use including tower_http to take care of this.
First create a folder titled assets
in the same spot that main.rs resides. Inside of assets create post.css
with some CSS.
assets/post.css
body {
background: #101010;
}
#Post {
background: #D5D9E7;
}
-
edit main.rs
use tower_http::services::ServeDir;
+
edit main.rs
use tower_http::services::ServeDir;
-// edit the router to serve static content from the assets folder
+// edit the router to serve static content from the assets folder
let app = Router::new()
.route("/", get(index))
@@ -223,19 +223,19 @@
.with_state(shared_state)
.nest_service("/assets", ServeDir::new("assets"));
-
We now need some logic in the post handler to match the user's query to any post with the same title.
edit main.rs
// We use two extractors in the arguments
-// Path to grab the query and State that has all our posts
+
We now need some logic in the post handler to match the user's query to any post with the same title.
edit main.rs
// We use two extractors in the arguments
+// Path to grab the query and State that has all our posts
async fn post(Path(query_title): Path<String>, State(state): State<Arc<Vec<Post>>>) -> impl IntoResponse {
- // A default template or else the compiler complains
+ // A default template or else the compiler complains
let mut template = PostTemplate{post_title: "none", post_date: "none".to_string(), post_body: "none"};
- // We look for any post with the same title as the user's query
+ // We look for any post with the same title as the user's query
for i in 0..state.len() {
if query_title == state[i].post_title {
- // We found one so mutate the template variable and
- // populate it with the post that the user requested
+ // We found one so mutate the template variable and
+ // populate it with the post that the user requested
template = PostTemplate{post_title: &state[i].post_title,
post_date: state[i].post_date.to_string(),
post_body: &state[i].post_body
@@ -246,20 +246,20 @@
}
}
- // 404 if no title found matching the user's query
+ // 404 if no title found matching the user's query
if template.post_title == "none" {
return (StatusCode::NOT_FOUND, "404 not found").into_response();
}
- // render the template into HTML and return it to the user
+ // render the template into HTML and return it to the user
match template.render() {
Ok(html) => Html(html).into_response(),
Err(_) => (StatusCode::INTERNAL_SERVER_ERROR, "try again later").into_response()
}
}
-
Ok great, but how will the user ever find our posts?
How about sending them a list of links to all our posts.
edit main.rs
// create an Askama template for our homepage
-// index_title is the html page's title
-// index_links are the titles of the blog posts
+
Ok great, but how will the user ever find our posts?
How about sending them a list of links to all our posts.
edit main.rs
// create an Askama template for our homepage
+// index_title is the html page's title
+// index_links are the titles of the blog posts
#[derive(Template)]
#[template(path = "index.html")]
@@ -268,7 +268,7 @@
pub index_links: &'a Vec<String>,
}
-// Then populate the template with all post titles
+// Then populate the template with all post titles
async fn index(State(state): State<Arc<Vec<Post>>>) -> impl IntoResponse{
@@ -289,7 +289,7 @@
).into_response(),
}
}
-
Index template will loop through our Vec of titles and render them as anchor links.
templates/index.html
<!DOCTYPE html>
+
Index template will loop through our Vec of titles and render them as anchor links.
templates/index.html
<!DOCTYPE html>
<html>
<head>
<title> {{ index_title }} </title>
@@ -305,26 +305,26 @@
</div>
</body>
</html>
-
Remember to insert your markdown into the database with this command
cargo run --bin markd "Some title" ./post.md
-
And now we run the server
cargo run --bin blog-rs
-
We're pretty much done, but I want to demonstrate how to create a custom Askama filter.
I'll be adding dashes to the titles to make them more URL friendly.
Because this:
localhost:4000/post/Some-Title
is more readable than this:
localhost:4000/post/Some%20Title
However, this will also make each post title have dashes. My simple "rmdashes" filter will remove the dashes to make the titles appear more pleasant in the page.
Askama searches for custom filters inside of mod filters {}
edit main.rs
mod filters {
+
Remember to insert your markdown into the database with this command
cargo run --bin markd "Some title" ./post.md
+
And now we run the server
cargo run --bin blog-rs
+
We're pretty much done, but I want to demonstrate how to create a custom Askama filter.
I'll be adding dashes to the titles to make them more URL friendly.
Because this:
localhost:4000/post/Some-Title
is more readable than this:
localhost:4000/post/Some%20Title
However, this will also make each post title have dashes. My simple "rmdashes" filter will remove the dashes to make the titles appear more pleasant in the page.
Askama searches for custom filters inside of mod filters {}
edit main.rs
mod filters {
- // This filter removes the dashes that I will be adding in main()
+ // This filter removes the dashes that I will be adding in main()
pub fn rmdashes(title: &str) -> askama::Result<String> {
Ok(title.replace("-", " ").into())
}
}
-// I replace spaces with dashes so that the title appears
-// easier to read in the URL. localhost:4000/post/a-title
+// I replace spaces with dashes so that the title appears
+// easier to read in the URL. localhost:4000/post/a-title
async fn main() {
for post in &mut posts {
post.post_title = post.post_title.replace(" ", "-");
}
- //
-
Now we use the rmdashes filter in posts.html
as we don't want the dashes in the web page. Only in the URL.
edit templates/posts.html
{% extends "base.html" %}
+ //
+
Now we use the rmdashes filter in posts.html
as we don't want the dashes in the web page. Only in the URL.
edit templates/posts.html
{% extends "base.html" %}
{% block post %}
<div class="post_title">
@@ -337,6 +337,6 @@
{{ post_body|markdown }}
</div>
{% endblock post %}
-
Optimizing the final binary
On Linux, use this command to view file sizes: ls -lh blog-rs
My binary, inside of target/debug/blog-rs
, is at 126M.
Here's an excellent guide on optimizing your binary.
Building my binary with the --release flag reduces the size to only 13M.
cargo build --release
-
An optimized binary now resides in target/release/blog-rs
Want a smaller binary size?
UPX gets my binary down further to 3.9M
upx target/release/blog-rs
-
Here's the full code for this project: https://github.com/spacedimp/rust-blog-example
\ No newline at end of file
+
On Linux, use this command to view file sizes: ls -lh blog-rs
My binary, inside of target/debug/blog-rs
, is at 126M.
Here's an excellent guide on optimizing your binary.
Building my binary with the --release flag reduces the size to only 13M.
cargo build --release
+
An optimized binary now resides in target/release/blog-rs
Want a smaller binary size?
UPX gets my binary down further to 3.9M
upx target/release/blog-rs
+
Here's the full code for this project: https://github.com/spacedimp/rust-blog-example
\ No newline at end of file diff --git a/blog/using-rust-tauri-and-sveltekit-to-build-a-note-taking-app/index.html b/blog/using-rust-tauri-and-sveltekit-to-build-a-note-taking-app/index.html index 232f586..fb60474 100644 --- a/blog/using-rust-tauri-and-sveltekit-to-build-a-note-taking-app/index.html +++ b/blog/using-rust-tauri-and-sveltekit-to-build-a-note-taking-app/index.html @@ -1,6 +1,6 @@ -2023-04-05
Tauri allows us to build fast, cross-platform, and small-sized apps using HTML, CSS, and JavaScript.
It accomplishes this by using WebViews. A WebView lets you embed web content (HTML,CSS, JavaScript) into an application without needing a full-fledged web browser.
Rust is used for the backend logic and SvelteKit for the frontend.
Each OS uses a different WebView rendering engine:
Make sure Rust and the Tauri dependencies are installed as described here.
SvelteKit requires Node.js. I install it using Fedora's package manager.
sudo dnf install nodejs
-
Instead of npm, I'll install pnpm as the Node.js package manager
sudo npm install -g pnpm
-
Now we can initialize a new svelte project.
$ mkdir notes && cd notes
+ Using Rust, Tauri, and SvelteKit to Build a Note Taking App Using Rust, Tauri, and SvelteKit to Build a Note Taking App
2023-04-05
In this blog post, I'll be guiding you through the process of building a note taking app using Tauri. Move over Electron :)
Tauri allows us to build fast, cross-platform, and small-sized apps using HTML, CSS, and JavaScript.
It accomplishes this by using WebViews. A WebView lets you embed web content (HTML,CSS, JavaScript) into an application without needing a full-fledged web browser.
Rust is used for the backend logic and SvelteKit for the frontend.
Each OS uses a different WebView rendering engine:
- Windows uses WebView2
- Linux uses WebKit
Setting up the project
Make sure Rust and the Tauri dependencies are installed as described here.
SvelteKit requires Node.js. I install it using Fedora's package manager.
sudo dnf install nodejs
+
Instead of npm, I'll install pnpm as the Node.js package manager
sudo npm install -g pnpm
+
Now we can initialize a new svelte project.
$ mkdir notes && cd notes
$ pnpm create svelte
// hit enter to create the project in the current directory
@@ -15,12 +15,12 @@
// @next gets latest version
$ pnpm add -D @sveltejs/adapter-static@next
-
edit svelte.config.js
// change adapter-auto to adapter-static
+
edit svelte.config.js
// change adapter-auto to adapter-static
import adapter from '@sveltejs/adapter-static';
...
-// add prerender entries
+// add prerender entries
kit: {
adapter: adapter(),
prerender: {
@@ -28,9 +28,9 @@
}
}
-
Disable SSR by creating src/routes/+layout.ts
export const prerender = true;
+
Disable SSR by creating src/routes/+layout.ts
export const prerender = true;
export const ssr = false;
-
Check your node.js version and make sure pnpm uses the correct one
$ node -v
+
Check your node.js version and make sure pnpm uses the correct one
$ node -v
v18.15.0
// edit .npmrc
@@ -38,7 +38,7 @@
// add your version number as so
use-node-version=18.15.0
-
Setup Tauri
$ pnpm add -D @tauri-apps/cli
+
Setup Tauri
$ pnpm add -D @tauri-apps/cli
$ pnpm tauri init
// What is your app name? notes
@@ -52,16 +52,16 @@
// What is your frontend dev command? pnpm run dev
// What is your frontend build command? pnpm run build
-
Run the app
// slow the first time it runs, but much faster afterwards
+
Run the app
// slow the first time it runs, but much faster afterwards
$ pnpm tauri dev
-
Setting up components
I recommend for beginners to go through the official Svelte tutorial here to grasp its fundamentals.
This is an excerpt of what a component is:
In Svelte, an application is composed from one or more components. A component is a reusable self-contained block of code that encapsulates HTML, CSS and JavaScript that belong together, written into a .svelte file. The 'hello world' example in the code editor is a simple component.
I'll be creating two components inside of src/lib
. One called Notes.svelte
will display all notes created. The other called CreateNote.svelte
will be a text box where we can add new notes.
Create src/lib/Notes.svelte
<script lang="ts">
+
Setting up components
I recommend for beginners to go through the official Svelte tutorial here to grasp its fundamentals.
This is an excerpt of what a component is:
In Svelte, an application is composed from one or more components. A component is a reusable self-contained block of code that encapsulates HTML, CSS and JavaScript that belong together, written into a .svelte file. The 'hello world' example in the code editor is a simple component.
I'll be creating two components inside of src/lib
. One called Notes.svelte
will display all notes created. The other called CreateNote.svelte
will be a text box where we can add new notes.
Create src/lib/Notes.svelte
<script lang="ts">
let title = "First Note";
</script>
<div id="notes">
<p> {title} </p>
</div>
-
Create src/lib/CreateNote.svelte
<script lang="ts">
+
Create src/lib/CreateNote.svelte
<script lang="ts">
let newNote;
let newTitle;
</script>
@@ -71,7 +71,7 @@
<textarea bind:value={newTitle} id="new-note-title" placeholder="Note title"></textarea>
<textarea bind:value={newNote} id="new-note-box" placeholder="Note body"></textarea>
</div>
-
A page is a route to a certain path. src/routes/+page.svelte
will be the homepage, for instance.
Import the components into the page by editing src/routes/+page.svelte
<script lang="ts">
+
A page is a route to a certain path. src/routes/+page.svelte
will be the homepage, for instance.
Import the components into the page by editing src/routes/+page.svelte
<script lang="ts">
import Notes from '$lib/Notes.svelte'
import CreateNote from '$lib/CreateNote.svelte'
</script>
@@ -80,7 +80,7 @@
<CreateNote/>
<Notes/>
</div>
-
You could place CSS style tags into each Page or Component, but I prefer a global CSS file.
Create static/global.css
/* CSS reset */
+
You could place CSS style tags into each Page or Component, but I prefer a global CSS file.
Create static/global.css
/* CSS reset */
*, *::before, *::after {
box-sizing: border-box;
}
@@ -115,7 +115,7 @@
isolation: isolate;
}
-/* Component and Page CSS */
+/* Component and Page CSS */
#container {
box-sizing: border-box;
@@ -128,9 +128,9 @@
#notes {
background: #eee;
}
-
Add it to src/app.html
inside of the head tag
<link rel="stylesheet" type="text/css" href="%sveltekit.assets%/global.css">
-
I'll be saving the notes in the frontend. For this we require Tauri's frontend API.
$ pnpm add -D @tauri-apps/api
-
We must tell Tauri which paths are available to our app. In this case I'll be writing to a file called db.bson in the user's home/notes-db directory.
Edit src-tauri/tauri.conf.json
"tauri": {
+
Add it to src/app.html
inside of the head tag
<link rel="stylesheet" type="text/css" href="%sveltekit.assets%/global.css">
+
I'll be saving the notes in the frontend. For this we require Tauri's frontend API.
$ pnpm add -D @tauri-apps/api
+
We must tell Tauri which paths are available to our app. In this case I'll be writing to a file called db.bson in the user's home/notes-db directory.
Edit src-tauri/tauri.conf.json
"tauri": {
"allowlist": {
"all": false,
"fs": {
@@ -141,10 +141,10 @@
"all": true
}
},
-
Also scroll down in the conf.json file and find "identifier". It should be unique to your app. I'll set it to com.random.random.
"bundle": {
+
Also scroll down in the conf.json file and find "identifier". It should be unique to your app. I'll set it to com.random.random.
"bundle": {
"identifier": "com.random.random",
}
-
Handling data in the backend
I've chosen to store the data as bson (Binary JSON). Read more about bson here. Basically it's how MongoDB stores JSON data on disk as binary.
Tauri lets the frontend pass data back and forth to the backend (Rust) using Tauri commands.
For the sake of brevity I'll just show the complete Rust backend code.
Edit src-tauri/src/main.rs
with:
#![cfg_attr(
+
Handling data in the backend
I've chosen to store the data as bson (Binary JSON). Read more about bson here. Basically it's how MongoDB stores JSON data on disk as binary.
Tauri lets the frontend pass data back and forth to the backend (Rust) using Tauri commands.
For the sake of brevity I'll just show the complete Rust backend code.
Edit src-tauri/src/main.rs
with:
#![cfg_attr(
all(not(debug_assertions), target_os = "windows"),
windows_subsystem = "windows"
)]
@@ -164,9 +164,9 @@
body: String,
}
-// builds a new Note object for the frontend
-// we then convert it to a bson document
-// lastly we convert it into a vec of bytes to store on disk (frontend handles appending then saving this to disk)
+// builds a new Note object for the frontend
+// we then convert it to a bson document
+// lastly we convert it into a vec of bytes to store on disk (frontend handles appending then saving this to disk)
#[tauri::command]
fn saveNote(title: &str, body: &str) -> Vec<u8> {
let note = Note { bson_uuid: bson::Uuid::new().to_string(), date_time: bson::DateTime::now(), title: title.to_string(), body: body.to_string() };
@@ -175,8 +175,8 @@
return bson::to_vec(&note_doc).unwrap();
}
-// after the frontend edits or deletes a note
-// it must be saved back to db.bson
+// after the frontend edits or deletes a note
+// it must be saved back to db.bson
#[tauri::command]
fn editNote(data: &str) -> Vec<u8> {
let vecNotes: Vec<Note> = serde_json::from_str(data).unwrap();
@@ -186,27 +186,27 @@
return docsArray;
}
-// loading the raw data from db.bson requires us to convert it to JSON
-// for the frontend to interact with
+// loading the raw data from db.bson requires us to convert it to JSON
+// for the frontend to interact with
#[tauri::command]
fn loadNotes(data: &str) -> String{
- // check if the database is empty.
- // Return early if it is, otherwise the program will crash
+ // check if the database is empty.
+ // Return early if it is, otherwise the program will crash
if data.chars().count() == 0 {
return String::from("no data");
}
- // frontend passes the database as a string array of bytes
- // parse it into bytes
+ // frontend passes the database as a string array of bytes
+ // parse it into bytes
let mybytes: Vec<u8> = data
.trim_matches(|c| c == '[' || c== ']')
.split(',')
.map(|s| s.parse().unwrap())
.collect();
- // now we iterate through the bytes and convert it
- // to a Vec of bson Document
+ // now we iterate through the bytes and convert it
+ // to a Vec of bson Document
let mut curs = Cursor::new(mybytes);
curs.set_position(0);
@@ -224,7 +224,7 @@
}
}
- // return to the frontend an array of bson documents as JSON
+ // return to the frontend an array of bson documents as JSON
return serde_json::to_string(&docs).unwrap();
}
@@ -234,21 +234,21 @@
.run(tauri::generate_context!())
.expect("error while running tauri application");
}
-
I create a Tauri app with three functions available to the frontend: saveNote, editNote, loadNotes
saveNote will be called from the frontend and be passed two values: title, body. I then create a new Note struct with those values then convert it to a bson::Document. Lastly I convert the document to a bson::Array (Vec of bytes) and return it to the frontend to handle storing it to disk.
editNote receives from the frontend an updated/modified version of the data stored on disk. The frontend requires this function to rebuild the bson database. We then return the binary bson back to the frontend to store to disk.
loadNotes takes what's stored on disk "[123],[100],etc.." and converts it to JSON for the frontend.
Also, edit src-tauri/Cargo.toml
to include bson as a dependency
[dependencies]
+
I create a Tauri app with three functions available to the frontend: saveNote, editNote, loadNotes
saveNote will be called from the frontend and be passed two values: title, body. I then create a new Note struct with those values then convert it to a bson::Document. Lastly I convert the document to a bson::Array (Vec of bytes) and return it to the frontend to handle storing it to disk.
editNote receives from the frontend an updated/modified version of the data stored on disk. The frontend requires this function to rebuild the bson database. We then return the binary bson back to the frontend to store to disk.
loadNotes takes what's stored on disk "[123],[100],etc.." and converts it to JSON for the frontend.
Also, edit src-tauri/Cargo.toml
to include bson as a dependency
[dependencies]
bson = {version = "2.6.0"}
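A quick aside on the wire format: Tauri serializes command arguments as JSON, so the frontend ends up passing the raw bson bytes to loadNotes as a comma-separated string. A minimal sketch of that round trip in plain JavaScript (the helper names are mine, not part of the app):

```javascript
// Sketch of the frontend <-> backend byte handoff (helper names are
// hypothetical, not part of the app).
// Uint8Array.prototype.toString joins elements with commas, so a byte
// buffer read from disk becomes "123,34,116" when sent to a command.
function bytesToWireString(bytes) {
  return bytes.toString();
}

// The receiving side can recover the bytes by splitting on commas.
function wireStringToBytes(str) {
  if (str.length === 0) return new Uint8Array(0);
  return new Uint8Array(str.split(',').map(Number));
}

const stored = new Uint8Array([123, 34, 116]); // start of a bson document
const wire = bytesToWireString(stored);        // "123,34,116"
const recovered = wireStringToBytes(wire);     // back to bytes
```

This is why loadStore() calls binData.toString() before invoking loadNotes: the Uint8Array becomes a "123,100,..." string on its way into Rust.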
-
Handling data in the frontend
Svelte's writable store allows each component or page to individually modify/read a global state. Whenever it changes, all components get the newly changed value.
Create and edit src/lib/store.js
import { writable } from 'svelte/store';
+
Handling data in the frontend
Svelte's writable store allows each component or page to individually modify/read a global state. Whenever it changes, all components get the newly changed value.
Create and edit src/lib/store.js
import { writable } from 'svelte/store';
import { homeDir, join } from '@tauri-apps/api/path';
import { exists, BaseDirectory, createDir, writeBinaryFile, readBinaryFile } from '@tauri-apps/api/fs';
import {invoke} from '@tauri-apps/api/tauri';
-// This value gets initialized when loadStore() is called
-// contains all bson stored on disk but as JSON
+// This value gets initialized when loadStore() is called
+// contains all bson stored on disk but as JSON
export const myStore = writable([]);
-// initialize myStore with the contents in the database
-// let Rust convert the binary to an array of JSON
+// initialize myStore with the contents in the database
+// let Rust convert the binary to an array of JSON
export async function loadStore() {
let binData = await readBinaryFile('./notes-db/db.bson', {dir: BaseDirectory.Home});
invoke('loadNotes', {data: binData.toString()}).then((dat) => {
@@ -256,19 +256,19 @@
});
}
-// send the updated JSON to the backend as a string
-// the backend converts it to an array of bson documents as bytes and we store it to db.bson
+// send the updated JSON to the backend as a string
+// the backend converts it to an array of bson documents as bytes and we store it to db.bson
export async function editStore(newVal) {
let jsonToString = JSON.stringify(newVal);
- // send the updated store to the backend
+ // send the updated store to the backend
invoke('editNote', {data: jsonToString}).then((dat) => {
- // store the updated bson to disk
+ // store the updated bson to disk
let data = new Uint8Array(dat);
writeBinaryFile('./notes-db/db.bson', data, {dir: BaseDirectory.Home});
loadStore();
});
}
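If you haven't used Svelte stores before, the whole contract is small enough to sketch by hand. This is an illustrative reimplementation of writable, not Svelte's actual source:

```javascript
// Illustrative reimplementation of Svelte's writable store contract --
// not Svelte's actual source, just the shape of it.
function writable(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    // Each subscriber is called immediately with the current value,
    // then again on every set/update.
    subscribe(fn) {
      subscribers.add(fn);
      fn(value);
      return () => subscribers.delete(fn); // the unsubscribe function
    },
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn(value));
    },
    update(fn) {
      this.set(fn(value));
    },
  };
}

// Usage: every subscriber sees each change, just like $myStore in a component.
const demoStore = writable([]);
let seen;
const unsubscribe = demoStore.subscribe((v) => { seen = v; });
demoStore.update((notes) => [...notes, { title: 'hello' }]);
unsubscribe();
```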
-
Edit src/routes/+pages.svelte
to call loadStore from above. Also to pass the store to the components.
<script>
+
Edit src/routes/+page.svelte
to call loadStore from above, and to pass the store to the components.
<script>
import { myStore, loadStore } from '$lib/store.js';
import { homeDir, join } from '@tauri-apps/api/path';
@@ -282,43 +282,43 @@
var db;
- // gets called whenever the page/component gets mounted
+ // gets called whenever the page/component gets mounted
onMount(async () => {
- // get the user's home directory
+ // get the user's home directory
let home = await homeDir();
- // append the directory we'll create
+ // append the directory we'll create
db = await join(home, 'notes-db');
- // check if notes-db directory exists. If not then create it
+ // check if notes-db directory exists. If not then create it
let checkDB = await exists('notes-db', {dir: BaseDirectory.Home});
if (!checkDB) {
- // if the directory doesn't exist then create it
+ // if the directory doesn't exist then create it
await createDir('notes-db', {dir: BaseDirectory.Home, recursive: true });
}
- // check if db.bson exists. If not then create it.
+ // check if db.bson exists. If not then create it.
let checkFile = await exists('./notes-db/db.bson', {dir: BaseDirectory.Home});
if (!checkFile) {
await writeBinaryFile('./notes-db/db.bson', new Uint8Array([]), {dir: BaseDirectory.Home});
}
- // load myStore with what's on disk
+ // load myStore with what's on disk
loadStore();
});
</script>
<div id="container">
<CreateNote/>
- /** Pass myStore to the component **/
+ /** Pass myStore to the component **/
<Notes allNotes={$myStore} />
</div>
-
Edit src/lib/Notes.svelte
<script lang="ts">
- // This value gets bound to myStore
- // <Notes allNotes={$myStore}>
+
Edit src/lib/Notes.svelte
<script lang="ts">
+ // This value gets bound to myStore
+ // <Notes allNotes={$myStore}>
export let allNotes;
- // bson stores Date as milliseconds
- // Convert the date from milliseconds to human readbale
+ // bson stores Date as milliseconds
+ // Convert the date from milliseconds to human readable
function numToDate(num) {
let toInt = parseInt(num, 10);
let date = new Date(toInt);
@@ -333,7 +333,7 @@
}
</script>
-// Loop through each note and render it
+// Loop through each note and render it
<div id="notes">
{#if allNotes.length > 0}
{#each allNotes as note }
@@ -343,44 +343,44 @@
<p> { numToDate(note.date_time.$date.$numberLong) } </p>
</div>
{/each}
- /** either still loading or no data exists **/
+ /** either still loading or no data exists **/
{:else}
<p>Try saving a note</p>
{/if}
</div>
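The diff cuts off the body of numToDate above, but the idea is simple: bson stores Date as epoch milliseconds, and the JSON returned by the backend wraps it as a string under $date.$numberLong. A sketch of the conversion (the ISO-date return format is my assumption; the post's exact formatting is elided):

```javascript
// Sketch of the date conversion: bson stores Date as epoch milliseconds,
// and the backend's JSON exposes it as a string under $date.$numberLong.
// The ISO-date return format here is an assumption; the post's exact
// formatting is cut off by the diff.
function numToDate(num) {
  const toInt = parseInt(num, 10);        // "$numberLong" arrives as a string
  const date = new Date(toInt);           // Date accepts epoch milliseconds
  return date.toISOString().slice(0, 10); // e.g. "2023-06-06"
}
```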
-
Edit src/lib/CreateNote.svelte
<script>
+
Edit src/lib/CreateNote.svelte
<script>
import { invoke } from '@tauri-apps/api/tauri'
import { loadStore } from '$lib/store.js'
import {BaseDirectory, writeBinaryFile, readBinaryFile} from '@tauri-apps/api/fs'
- // bound to value of title textarea
+ // bound to value of title textarea
let newNote;
- // bound to value of body textarea
+ // bound to value of body textarea
let newTitle;
- // This isn't bound to $myStore as above
- // This gets assigned the raw binary stored on disk
+ // This isn't bound to $myStore as above
+ // This gets assigned the raw binary stored on disk
let allNotes;
async function save(){
- // sets allNotes to contain the binary stored on disk
+ // sets allNotes to contain the binary stored on disk
await load();
- // Let the backend handle creating a new binary document
+ // Let the backend handle creating a new binary document
invoke('saveNote', {title: newTitle, body: newNote} ).then((response) => {
- // Here I simply merge the returned data with allNotes
+ // Here I simply merge the returned data with allNotes
let loaded = new Uint8Array(allNotes);
response = new Uint8Array(response);
let mergeArray = new Uint8Array(loaded.length + response.length);
mergeArray.set(loaded);
mergeArray.set(response, loaded.length);
- // and save it to disk
+ // and save it to disk
writeBinaryFile('./notes-db/db.bson', mergeArray, {dir: BaseDirectory.Home});
- // after saving, reload writable myStore with saved data on disk
+ // after saving, reload writable myStore with saved data on disk
loadStore();
- // empty textarea contents after save
+ // empty textarea contents after save
newNote = "";
newTitle = "";
});
@@ -397,7 +397,7 @@
<textarea bind:value={newTitle} id="new-note-title" placeholder="Note title"></textarea>
<textarea bind:value={newNote} id="new-note-box" placeholder="Note body"></textarea>
</div>
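The merge step inside save() is worth isolating: Uint8Array has no concat, so the code allocates a buffer big enough for both arrays and copies with set() at an offset. As a standalone sketch (the function name is hypothetical):

```javascript
// Standalone sketch of the byte merge in save(): Uint8Array has no
// concat, so allocate room for both buffers and copy with set()
// (the function name is hypothetical, not from the post).
function mergeBytes(existing, fresh) {
  const merged = new Uint8Array(existing.length + fresh.length);
  merged.set(existing);               // old database bytes first
  merged.set(fresh, existing.length); // new document appended after them
  return merged;
}

// e.g. merging the on-disk database with a freshly serialized note:
const onDisk = new Uint8Array([1, 2]);
const newDoc = new Uint8Array([3, 4, 5]);
const merged = mergeBytes(onDisk, newDoc); // bytes 1,2,3,4,5
```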
-
Lastly, I'll create another page that will be used to let the user edit a note. [slug] will be the note's UUID.
Create src/routes/edit/[slug]/+page.svelte
<script>
+
Lastly, I'll create another page that lets the user edit a note. [slug] will be the note's UUID.
Create src/routes/edit/[slug]/+page.svelte
<script>
import { myStore, editStore } from '$lib/store.js';
import { page } from '$app/stores';
import { onMount} from 'svelte';
@@ -410,15 +410,15 @@
function save(){
let currentStore = $myStore;
- // get the index of the current note that we're editing
+ // get the index of the current note that we're editing
let index = currentStore.findIndex(item => item.bson_uuid === slug);
- // grab the values for editing
+ // grab the values for editing
let updatedObject = {...currentStore[index]};
- // edit the values with what's in the textareas
+ // edit the values with what's in the textareas
updatedObject.title = oneNoteTitle;
updatedObject.body = oneNoteBody;
- // update the store
+ // update the store
myStore.update(store => {
let updatedStore = [...store];
updatedStore[index] = updatedObject;
@@ -426,26 +426,26 @@
return updatedStore;
});
- // save to disk
+ // save to disk
editStore($myStore);
}
- // deletes this note
+ // deletes this note
function del() {
- // filter out this note in myStore
+ // filter out this note in myStore
myStore.update(objects => objects.filter(obj => obj.bson_uuid !== slug));
- // give the updated store to the backend
+ // give the updated store to the backend
editStore($myStore);
- // redirect to /
+ // redirect to /
window.location.href="/";
}
- // check the slug to match a UUID in myStore
+ // check the slug to match a UUID in myStore
onMount(async () => {
slug = $page.params.slug.toString();
$myStore.forEach(element => {
if (element.bson_uuid === slug) {
- // grab the values and render them to the DOM
+ // grab the values and render them to the DOM
oneNote = element;
oneNoteTitle = element.title;
oneNoteBody = element.body;
@@ -464,6 +464,6 @@
<textarea bind:value={oneNoteTitle} id="edit-note-title"></textarea>
<textarea bind:value={oneNoteBody} id="edit-note-box"></textarea>
</div>
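The save and delete handlers above boil down to two pure operations on the store array: replace one note matched by bson_uuid, or filter it out. Sketched as standalone functions (the names are mine, not from the post):

```javascript
// The edit page's two operations on the store array, as pure functions
// (function names are hypothetical, not from the post).
function updateNote(store, slug, title, body) {
  const index = store.findIndex((item) => item.bson_uuid === slug);
  if (index === -1) return store; // unknown slug: leave the store as-is
  const updated = [...store];     // copy so the original isn't mutated
  updated[index] = { ...updated[index], title, body };
  return updated;
}

function deleteNote(store, slug) {
  return store.filter((item) => item.bson_uuid !== slug);
}
```

myStore.update(...) in the component does the same thing inline; writing it this way just makes the copy-then-replace pattern explicit.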
-
Done
Running the following command will build the app into a binary located at src-tauri/target/release/
pnpm tauri build
-
My binary says it's at 10M, but I get it down to 3.0M by using UPX
$ upx notes
-
For cross platform compilation check out the official Tauri docs.
Here is the complete code on Github
Final notes
My initial goal was to have drag and drop functionality of images or videos. This would have made this tutorial way longer which is not my goal.
I chose to store the data as bson (binary JSON) as I was planning to store the images/videos as blobs. I'm not sure that this would even work as MongoDB's docs mention that a bson document can only store 16MB. I guess something like IndexedDB would serve my goals better.
\ No newline at end of file
+
Running the following command will build the app into a binary located at src-tauri/target/release/
pnpm tauri build
+
My binary weighs in at 10M, but I get it down to 3.0M by using UPX:
$ upx notes
+
For cross-platform compilation, check out the official Tauri docs.
Here is the complete code on Github
My initial goal was to have drag-and-drop support for images or videos, but that would have made this tutorial much longer, which is not my goal.
I chose to store the data as bson (binary JSON) because I was planning to store the images/videos as blobs. I'm not sure that would even work, as MongoDB's docs mention that a single bson document can store at most 16MB. Something like IndexedDB would probably serve my goals better.
\ No newline at end of file
diff --git a/gpg/index.html b/gpg/index.html
index b7e53a6..c6fa066 100644
--- a/gpg/index.html
+++ b/gpg/index.html
@@ -1,4 +1,4 @@
-pub ed25519 2023-03-11 [SC] [expires: 2024-03-10]
+ Spacedimp GPG key
pub ed25519 2023-03-11 [SC] [expires: 2024-03-10]
040BFE939DF9D855404D80BEC674E53FEB54B7F0
uid [ultimate] space dimp <spacedimp@protonmail.com>
sub cv25519 2023-03-11 [E] [expires: 2024-03-10]
@@ -28,4 +28,4 @@
=2fgc
-----END PGP SIGNATURE-----
-
\ No newline at end of file
+
SpaceDimp is dedicated to building beginner-friendly tutorials related to programming. Mainly using Rust or Go.
Topics will range from web-development, game development, and all the way down to low level Assembly.
Here's a great quote
pub fn main() {
+ Spacedimp Hello world
SpaceDimp is dedicated to building beginner-friendly tutorials related to programming, mainly using Rust or Go.
Topics will range from web development and game development all the way down to low-level assembly.
Here's a great quote
pub fn main() {
let name = "Alexander the Great";
println!("There is nothing impossible to him who will try - {}", name);
}
-
\ No newline at end of file
+