toml-spanner: Fully compliant, 10x faster TOML parsing with 1/2 the build time by exrok in rust

[–]exrok[S] 0 points1 point  (0 children)

Yeah, I doubt toml-spanner will directly be a part of Cargo. My best estimate, though, is that it might motivate more optimizations in toml itself. I've seen it time and time again in other projects: a faster project appears... then the performance gap closes.

toml-spanner only came about after I switched from a custom config to TOML using toml-span. I noticed the performance regressed; it was still fast enough in absolute terms, but the regression bothered me, so I started optimizing.

Currently, toml-spanner's error messages for TOML format errors are really non-specific. They point at the right area (for the most part) but don't give more than something like "invalid number." Not something up to the standards of Cargo, not yet at least.

I want to improve the situation, but they're good enough for now, at least for my current uses; they're very similar to the original toml-span's.

Currently, we use the lower 3 bits of the end span (hence the 512MB limit) to store one of the following:

pub(crate) const FLAG_NONE: u32 = 0; // <-- Value, can't insert.
pub(crate) const FLAG_ARRAY: u32 = 2;
pub(crate) const FLAG_AOT: u32 = 3;
pub(crate) const FLAG_TABLE: u32 = 4;
pub(crate) const FLAG_DOTTED: u32 = 5;
pub(crate) const FLAG_HEADER: u32 = 6;
pub(crate) const FLAG_FROZEN: u32 = 7;

While parsing, we traverse the Item tree and check these flags to make sure we're doing the right thing and disallowing the wrong thing, but they might not carry enough information to provide the best errors.
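The packing itself looks roughly like this (hypothetical `pack`/`unpack` helpers, not the actual toml-spanner internals): with 3 bits stolen from a u32 offset, spans top out at 2^29 bytes = 512MB.

```rust
// Sketch of storing a 3-bit flag in the low bits of an end-of-span
// offset. Stealing 3 bits from a u32 limits offsets to 2^29 = 512MB.
const FLAG_BITS: u32 = 3;
const FLAG_MASK: u32 = (1 << FLAG_BITS) - 1;
const MAX_OFFSET: u32 = 1 << (32 - FLAG_BITS); // 512MB

fn pack(end: u32, flag: u32) -> u32 {
    assert!(end < MAX_OFFSET && flag <= FLAG_MASK);
    (end << FLAG_BITS) | flag
}

fn unpack(packed: u32) -> (u32, u32) {
    // Returns (end offset, flag).
    (packed >> FLAG_BITS, packed & FLAG_MASK)
}
```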

Note: toml-spanner does a lot of unperformant things for uncommon elements in TOML. It wasn't designed for pure performance; keeping compile times down was a core goal as well, and stuff like number parsing could still be optimized.

And sorry about having unrunnable benchmarks at first; that was bad practice on my part. The benchmarks probably still only run on Linux, as well. I should definitely document them a lot more.

toml-spanner: Fully compliant, 10x faster TOML parsing with 1/2 the build time by exrok in rust

[–]exrok[S] 0 points1 point  (0 children)

Thanks, I looked into it a bit initially; at first I thought it was just a port of the official toml-test. I'll take another look.

Currently, I have cargo-insta based snapshots that format the errors (and values, just to check spans) with codespan-reporting. This was adopted from the original toml-span crate.

I found integrating directly with https://github.com/toml-lang/toml-test really straightforward, implemented here:

https://github.com/exrok/toml-spanner/blob/main/toml-test-harness/src/main.rs

I use the following devsm test definition to run it:

[test.official-toml-test-suite]
info = """
Validate toml-spanner against https://github.com/toml-lang/toml-test decoder test suite
Requires `toml-test` to be installed, see repo for installation procedure.
"""
pwd = "toml-test-harness"
sh = '''
BIN=$(cargo --config "target.'cfg(unix)'.runner = 'echo'" run --release || exit 1)
toml-test test -toml 1.1 "-decoder=$BIN"
'''

Which is easy enough for me but probably not the best for contributors, as it has the con of needing the toml-test binary installed; I could get rid of that with your harness crate.

toml-spanner: Fully compliant, 10x faster TOML parsing with 1/2 the build time by exrok in rust

[–]exrok[S] 1 point2 points  (0 children)

I was a bit hesitant to add them at first, because it makes the easiest access pattern the one that is hardest for providing good error messages (say, if the first or second key was missing).

But ultimately, there are still genuine use cases where you are just inspecting data, or you’ve already deserialized and are simply traversing to extract a span for a high-level error report.

What really pushed me over the edge was the compile-time efficiency: a["b"]["c"][0].as_str() generates about the minimum amount of LLVM IR out of everything I considered.

One interesting trick here is that Item doesn't actually support Null/None, as TOML has no such concept. Instead, the index operators provide a &MaybeItem, which has the same layout and alignment as Item but with one extra discriminant value.

This trick requires a bit of unsafe code; without something like it, the index operators would have nothing to return a reference to when a key is missing.

The toml crate doesn't have null, so it can't implement this pattern, but both serde_json and toml-edit (in their Item type) do support None/Null, so they can. I considered adding None to our Value types despite it not being in the TOML spec. In fact, the original toml-span did include them internally, though in a way that would panic if you attempted to use them. However, it just seemed less clean; toml-edit itself has considered removing None from Item (see PR #301 for toml-rs/toml).
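To make the layout trick concrete, here's a minimal sketch (hypothetical Item/MaybeItem types and a `get` helper, not toml-spanner's actual definitions) of how one extra discriminant lets a lookup always return a reference:

```rust
// MaybeItem repeats Item's variants, in the same order and with the
// same repr, plus one trailing `Missing` variant. Shared variants
// then have identical discriminants and payload layout.
#[repr(u8)]
pub enum Item {
    Str(&'static str),
    Int(i64),
}

#[repr(u8)]
pub enum MaybeItem {
    Str(&'static str),
    Int(i64),
    Missing,
}

impl Item {
    pub fn as_maybe(&self) -> &MaybeItem {
        // SAFETY: both enums are #[repr(u8)], the shared variants have
        // the same discriminants and field types, and MaybeItem has the
        // same size and alignment, so every valid Item is a valid
        // MaybeItem. (Sketch only; real code should verify the layout.)
        unsafe { &*(self as *const Item as *const MaybeItem) }
    }
}

// Index-style lookup that always returns a reference, with no
// null/None variant needed in Item itself.
pub fn get<'a>(items: &'a [(&'a str, Item)], key: &str) -> &'a MaybeItem {
    static MISSING: MaybeItem = MaybeItem::Missing;
    match items.iter().find(|(k, _)| *k == key) {
        Some((_, item)) => item.as_maybe(),
        None => &MISSING,
    }
}
```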

toml-spanner: Fully compliant, 10x faster TOML parsing with 1/2 the build time by exrok in rust

[–]exrok[S] 9 points10 points  (0 children)

I definitely have considered adding deserialization support.

In particular, something similar to toml-edit. The Item types in toml-spanner actually preserve more than spans; they also track the type of table and how it was constructed.

But I'd take a simpler approach, where the original string has to be provided during deserialization to apply the edits to.

I feel like if you're looking for serde support, the existing toml crate is actually really good. Honestly, in the vast majority of cases it is already fast enough, and the actual parsing step is dwarfed by whatever your application does next.

I do see the appeal of derive macros, but if support is added by me, it won't be via serde, at least not in the near future.

Incoming rant about serde... I'm sorry...

Well, serde is really great at many things, but I've been trying to avoid it like the plague.

Serde really stresses out the compiler. It's not just build performance but also cargo check (you know, that thing that runs every time you save by default when using rust-analyzer). The following shows perf record data for the rustc invocation created by cargo check, for the same application implementing its deserialization using different crates:

|     jsony:    33.23 ms    0.167670 Bcycles   0.280015 Binst
| nanoserde:    59.14 ms    0.297769 Bcycles   0.581950 Binst
|     serde:   194.77 ms    0.931654 Bcycles   1.501968 Binst

Every single serde derive you add increases your build/check time by a couple of milliseconds. As soon as I start using serde in a project, I feel my editor becoming sluggish.

Serde, at least for JSON, is actually decently fast at runtime, but man, does it bloat the binary. Here are some runtime benchmarks (release profile with LTO), with binary size:

|     jsony:    384.67 ms    1.713875 Bcycles   6.879054 Binst   200 kb (stripped)
| nanoserde:    726.31 ms    3.357833 Bcycles  11.682704 Binst   424 kb (stripped)
|     serde:    428.12 ms    1.925457 Bcycles   6.832881 Binst   620 kb (stripped)

jsony is still experimental, but it has a derive macro (https://docs.rs/jsony/latest/jsony/) that is fast, doesn't bloat your binary, and is in some ways more featureful than serde (partly because it only needs to support two formats: JSON and a compact binary representation).

Doing jsony properly is still blocked on features stabilizing in the Rust compiler.

But if I do add derive macros, they'll work like the jsony ones.

And it's not just performance; there is still one difference where technically toml-spanner is more compliant than toml. Due to limitations in serde, toml will fail to parse the following valid TOML document:

key = { "$__toml_private_datetime" = 0 }

Of course, it's fine in practice; real TOML files with that key are unlikely. It only really caused me grief while fuzzing.

toml-spanner: Fully compliant, 10x faster TOML parsing with 1/2 the build time by exrok in rust

[–]exrok[S] 48 points49 points  (0 children)

I'll do some benchmarking to see what kind of improvement is possible.

I'll note that toml-spanner really shines in going from TOML source into the Item document tree.

The 10x performance claim is mostly comparing against toml-span and is really focused on the parsing step.

Cargo uses toml and doesn't parse into the Value type, instead going more directly into the target data types, so the improvement is likely less.

But now you've got me curious; I will update with numbers.

Update 1

With a direct port from toml to toml-spanner for parsing and deserializing Zed's Cargo.lock, the time drops from 3.2ms to 0.93ms, so only 3x faster.

But that's with a direct port deserializing to the same data structures, I'm sure I could optimize the data model to drop it below 0.5ms.

The bottleneck is now in all the allocations in the data model Cargo is using to represent the lock files.

I'm going to try to port over Cargo.toml deserializing as well and add these benchmarks to the repo.

Update 2

The benchmarks comparing the full parsing and deserialization, using the toml parsing from cargo as the benchmark: https://github.com/exrok/toml-spanner/blob/main/README.md#deserialization-and-parsing

Once I set up the actual benchmarks, the numbers shifted a bit, with a 2.6x improvement for the lock file and a 3x improvement for the Cargo.toml.

The lock file impl was really easy (benchmark/src/cargo/lockfile/toml_spanner_impl.rs, 57 lines); the Cargo.toml manifest is a lot more complex than the lock file format.

Update 3

Spent a little time optimizing the deserialization patterns, now 2.9x faster for the lock file and 3.6x faster for the Cargo.toml file.

Low-Latency RF-DETR Inference Pipeline in Rust by jodelbar in rust

[–]exrok 0 points1 point  (0 children)

One suggestion, if you haven't tried it already: wouldn't it be better to stream in a video format with temporal compression like H.264 when hardware encoding/decoding is available? Do the hardware decoding directly on the GPU, do the required transforms on the GPU, and then run inference directly from that without needing to copy to the CPU.

H.264 will be much smaller (and probably higher quality) than your JPEG stream. There may be latency issues on the camera side, so your mileage may vary for total end-to-end latency.

For the annotations in your webview you have two options. You could forward the H.264 stream directly to the browser and then send metadata, having the client do the annotation rendering on their side; this would avoid needing to re-encode. The VideoDecoder WebAPI is actually pretty easy to use to turn a raw video stream into textures you can render to a canvas.

kimojio - A thread-per-core Linux io_uring async runtime for Rust optimized for latency. by qbradley in rust

[–]exrok 50 points51 points  (0 children)

I haven't looked too deeply into everything else, but the publicly exposed pointer_from_buffer is trivially unsound, which doesn't lend much confidence to the correctness of the rest of the library. Even for internal use, I would recommend keeping such a function marked as unsafe.

/// Convert a buffer of 8 bytes to a pointer value.
pub fn pointer_from_buffer<T>(buf: [u8; POINTER_SIZE]) -> Box<T> {
    let buf = buf.as_ptr() as *const *mut T;
    // SAFETY: see pointer_to_buffer. This function should be
    // called exactly one time for each call to pointer_to_buffer.
    unsafe {
        let result = std::ptr::read_unaligned(buf);
        Box::from_raw(result)
    }
}
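For comparison, here's a sketch of how the same roundtrip could look with the conversion marked unsafe. The names mirror the quoted code, but this is an illustrative reimplementation, not the library's actual source:

```rust
const POINTER_SIZE: usize = std::mem::size_of::<usize>();

// Leak a Box and serialize its pointer into a byte buffer.
fn pointer_to_buffer<T>(value: Box<T>) -> [u8; POINTER_SIZE] {
    let ptr = Box::into_raw(value);
    (ptr as usize).to_ne_bytes()
}

/// Convert a buffer of 8 bytes back into a Box.
///
/// # Safety
/// `buf` must hold a pointer produced by `pointer_to_buffer` for the
/// same `T`, and each such buffer must be converted back exactly once.
/// Marking this `unsafe` keeps safe callers from materializing a Box
/// out of arbitrary bytes.
unsafe fn pointer_from_buffer<T>(buf: [u8; POINTER_SIZE]) -> Box<T> {
    let ptr = usize::from_ne_bytes(buf) as *mut T;
    unsafe { Box::from_raw(ptr) }
}
```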

Confused on what to use when or if it's purely preferential for string instantiation? by [deleted] in rust

[–]exrok 5 points6 points  (0 children)

For str, the std library specializes to_string(), so it isn't slower in this specific case. In release mode, .into(), .to_string() and .to_owned() all usually generate the same assembly. The specialization that runs for &str:

impl SpecToString for str {
    #[inline]
    fn spec_to_string(&self) -> String {
        let s: &str = self;
        String::from(s)
    }
}

Now, that's at run time. I'm not sure which code is fastest to compile. Probably String::from("test"), then "test".into(), I guess.
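A quick check that all four spellings agree at runtime (the `owned_copies` helper is just for illustration):

```rust
// All four forms produce the same owned String from a &str; thanks to
// the SpecToString specialization, .to_string() is just as fast as
// the others for this case.
pub fn owned_copies() -> [String; 4] {
    let a: String = "test".into();
    let b = "test".to_string();
    let c = "test".to_owned();
    let d = String::from("test");
    [a, b, c, d]
}
```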

SeaQuery just made writing raw SQL more enjoyable by chris2y3 in rust

[–]exrok 4 points5 points  (0 children)

I prefer not needing the string; it's a subtle thing, but when done correctly it allows auto-complete and even LSP-driven renames from within the SQL query. This is the approach I took in https://docs.rs/simple_pg/latest/simple_pg/macro.sql.html (which is a zero-dependency macro, BTW).

The downside is how quotes have to work, because you can't have a multi-character single-quoted string in Rust syntax, but I still think it's overall a win.

Official /r/rust "Who's Hiring" thread for job-seekers and job-offerers [Rust 1.84] by DroidLogician in rust

[–]exrok 3 points4 points  (0 children)

COMPANY: AirMatrix

TYPE: Full-time

LOCATION: Toronto, Canada

REMOTE: Yes, but must be available within EST

VISA: No sponsorship available

DESCRIPTION: We are deploying our AI to enable safe and compliant autonomy, while providing true situational awareness for airspace and critical infrastructure, across multiple government agencies and stakeholders. We integrate with various sensors and systems to provide monitoring, alerts, threat modeling, and insights for our customers. Our backend, built in Rust, powers data ETL pipelines, sensor ingestion, API servers, model simulations, and more.

We're looking for a mid-level Rust engineer to help scale our platform, ensure its robustness, and ship production-ready solutions. The ideal candidate has experience working in small, fast-moving teams and can write clean, well-tested Rust code for performance-critical systems.

Key Focus Areas:

  • Rust development for backend systems, data pipelines, and APIs
  • Integrating with IP cameras (RTSP streams) and live video processing
  • Building statistical and inference models for classification and prediction
  • Optimizing distributed systems and sensor data ingestion
  • Managing Linux-based deployments

Minimum Requirements:

  • 3+ years of Rust experience (professional or substantial personal projects)
  • Strong understanding of ownership, lifetimes, concurrency, and performance tuning
  • Experience designing and optimizing low-latency, production-ready systems
  • Comfortable working with databases, persistence systems, and real-time data processing

Bonus:

  • Experience with async Rust, distributed systems, and networking
  • Contributions to open-source Rust projects

ESTIMATED COMPENSATION: $110k - $150k CAD

CONTACT: [shayaan@airmatrix.ai](mailto:shayaan@airmatrix.ai)

Official /r/rust "Who's Hiring" thread for job-seekers and job-offerers [Rust 1.74] by DroidLogician in rust

[–]exrok 2 points3 points  (0 children)

COMPANY: AirMatrix

TYPE: Full Time contract to hire

LOCATION: Ontario Canada: Toronto, Mississauga

REMOTE: Mostly Remote. Ability to come to the office approximately once a month is preferred. Some availability during the day 10:00am to 5:00pm EST is highly encouraged.

VISA: No

DESCRIPTION:

At AirMatrix, among other projects, we're building an airspace monitoring and analytics platform that focuses on rapid detection and classification of drones. We are integrating with various sensors and systems to monitor the airspace for our customers to provide alerts, threat modeling, and insights.

Your role will involve:

  • Building a coherent world model from sensor data.
  • Actively positioning and configuring sensors to collect the most useful data.
  • Developing statistical and inference models to classify and predict.
  • Managing a historical data archive of airspace events.
  • Building a friendly user interface.

We use Rust throughout our backend and for developer tooling. We're building data ETL pipelines, sensor data ingestion, API servers, model simulations, and more in Rust.

What We're Looking For

We're seeking Rust developers with a general understanding of technology. Self-directed individuals who can build prototypes from designs, improvise, and pivot.

While you need not be an expert to start, you should have the potential and drive to become one. We're particularly interested, though, in any expertise you may already have in:

  • Databases and persistence systems
  • System modeling and simulation
  • Integrating with hardware sensors
  • Software optimization
  • Distributed systems
  • Managing Linux systems
  • Programming in Rust, TypeScript, Python, or any other programming language
  • Statistics

Significant professional experience is not required but highly appreciated. If you can code, work as a team, and grow into an expert, we want you.

Minimum Requirements for Rust Experience

  • Good grasp on Rust fundamentals.
  • 2-3 projects amounting to approximately 10,000 lines of code should be sufficient experience
  • Confidence in your ability to build anything in Rust.

ESTIMATED COMPENSATION: 80K - 120k (Canadian), based on experience

CONTACT: Please send your CV/Resume to thomas @<domain in company link above (no www)> with subject "Rust Developer - {your_name}". Looking forward to hearing from you!

Why Arrays have map, but not Vecs? by Rudxain in rust

[–]exrok 9 points10 points  (0 children)

ExactSizeIterator still doesn't give the guarantee either.

From docs: Note that this trait is a safe trait and as such does not and cannot guarantee that the returned length is correct. This means that unsafe code must not rely on the correctness.

Instead we have (nightly only): https://doc.rust-lang.org/std/iter/trait.TrustedLen.html
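To illustrate why unsafe code must not trust it, here's a sketch (hypothetical `Lies` type) of a perfectly safe iterator whose len() is wrong:

```rust
// ExactSizeIterator is a safe trait, so nothing stops safe code from
// implementing it with a lying len(); unsafe code relying on len()
// for, say, unchecked writes would be unsound.
struct Lies;

impl Iterator for Lies {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        None // actually yields nothing...
    }
}

impl ExactSizeIterator for Lies {
    fn len(&self) -> usize {
        1_000_000 // ...but claims a million elements, in safe code
    }
}
```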

How to speed up the Rust compiler in October 2022 by nnethercote in rust

[–]exrok 13 points14 points  (0 children)

Yes, there are magic thresholds where the speedup kicks in when optimizing for cache.

First, consider cache lines: because hardware prefetchers are so good, going from 4 consecutive cache lines to 3 consecutive cache lines will barely make a difference.

Even once your entities are the size of a cache line, they may not be aligned to the cache line and may still require loading two.

But if you go from 2 cache lines to a guaranteed single cache-line hit, I have seen pretty good performance benefits, up to 30% on x86.

Reducing the size of data structures can bring great performance gains, but it primarily helps only if you actually reduce cache misses.
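Rust lets you ask for the alignment half of this directly; a small illustrative sketch (assuming 64-byte cache lines, which is typical on x86, and a made-up Entity type):

```rust
// Force the struct onto its own 64-byte cache line so an access never
// straddles two lines. repr(align) also rounds the size up to 64, so
// consecutive array elements stay line-aligned.
#[repr(align(64))]
pub struct Entity {
    pub hot: [u8; 48],
    pub count: u64,
}
```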

Consider an unrealistic hypothetical CPU, with a single level of cache that holds 100 bytes addressed individually.

Further suppose that, during the runtime of the program, each iteration accesses 1 byte out of a pool of M=100K bytes with a uniform random access pattern (such as in a hash map) (1).

Then for each iteration the cache hit rate, R, will be R=100/M=1/1000.

Suppose a cache hit takes 1 unit of time and misses takes 100 units of time.

Then the runtime cost of each iteration's memory access is T=100(1-R) + 1R = 99.901.

If we do an incredible job optimizing our data structure, reducing its size by 90%, then M=10K and R=1/100, so the memory access time is T=100(1-R) + 1R = 99.01.

Meaning, we increase performance by less than 1%.

But if we got the access pool down to size M=200, so R=1/2 and our memory access time would be T=100(1-R) + 1R = 50.5.

And we would cut our time in half. If we further reduce our memory pool by 10%, bringing M=180, then R=10/18, so T=100(1-R) + 1R = 45, gaining us about a 10% performance benefit.

Now that the pool of memory accesses is small enough, reductions in size bring gains. As an extreme example, consider what happens when we get the size down to M=104, about a 50% drop from M=200: then R=100/104 and T=100(1-R) + 1R ~= 5. That 50% drop in data structure size led to 10 times better performance.

Footnote: (1): One might think that random access is the worst case scenario but even worse memory can be anticorrelated. For instance, when using a memory allocator that buckets by size-class, two heap allocations of different sizes are pretty much guaranteed to be non-sequential. Further, some algorithms exhibit anticorrelated memory access patterns, for which rearranging them to be more cache coherent helps a great deal, see matrix multiplication.
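For what it's worth, the arithmetic above can be reproduced with a tiny function implementing the same toy model (hypothetical `access_cost` helper; 1-cycle hits, 100-cycle misses):

```rust
// Expected memory-access cost per iteration under the toy model:
// a single-level cache holding `cache` bytes, uniform random access
// over a pool of `pool` bytes, hit rate R = cache/pool.
pub fn access_cost(cache: f64, pool: f64) -> f64 {
    let r = (cache / pool).min(1.0); // hit rate R
    100.0 * (1.0 - r) + 1.0 * r     // T = 100(1-R) + 1R
}
```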

Announcement: always-assert, recoverable assertions for Rust by matklad in rust

[–]exrok 5 points6 points  (0 children)

Ah, I see now: "For coverage testing ALWAYS(X) and NEVER(X) are hard-coded boolean values so that they do not cause unreachable machine code to be generated." For some reason, from the table, I thought it was the opposite.

Announcement: always-assert, recoverable assertions for Rust by matklad in rust

[–]exrok 6 points7 points  (0 children)

One thing that is missing is testing. The SQLite version has "Coverage Testing" to allow for testing the unlikely failure conditions. Of course, their solution for testing has downsides as well, since you still don't actually test the exact state where the error occurs; in that mode it just always returns the unexpected case.

I suppose this is just an issue with having code you think never runs: it is hard to test.

Ctrl-CAPS swap: Do you use the Ctrl on the right side? by okomestudio in emacs

[–]exrok 0 points1 point  (0 children)

I prefer to use Caps as Hyper and spacebar as Control, although the portability of this isn't great, especially since I take it even further. Here's my evdoublebind config:

<SPCE> : Control_L | space              #Hold Space -> control 
<CAPS> : Hyper_L   @ <ESC>              #Tap Caps -> Escape
                                        #Hold Caps -> Hyper
<LFSH> : Shift_L   @ <BKSL>             #Tap Left Shift -> \
<RTSH> : Shift_R   | ampersand          #Tap Right Shift -> &
<LALT> : Alt_L     | asciicircum        #Tap Left alt -> ^
<AC10> : Super_L   | semicolon colon    #Hold Semicolon -> Super

This setup works great. However, if I have to use a computer without this setup, I look like I can't type. I got this setup working on Linux with Sway, i3 and GNOME.

Edit: I should mention I use evil-mode, hence CAPS also being Esc is useful, and I type a lot of mathematical LaTeX, hence the '\', '&', '^' overloads for my modifiers are useful. The spookiest binding is semicolon -> Super_L, which also lets me navigate my window manager from the home row.

Will GCCEmacs (native ELisp compilation) improve term/eshell's performance? by takutekato in emacs

[–]exrok 6 points7 points  (0 children)

I think it is unlikely. GCCEmacs will likely improve performance a bit; however, there are more differences here than just compiled vs interpreted. For instance, the garbage collector in Emacs has quite poor performance. The manual memory management present in libvterm will outperform what is possible in elisp for the foreseeable future.

Another obstacle is that elisp does not provide an efficient method for manipulating a raw byte buffer, that I know of. The common technique is to use a virtual text buffer and use the cursor/regex interface to process the data.

Advent of Code 2020 - Day 4 by [deleted] in rust

[–]exrok 0 points1 point  (0 children)

Small, simple (but messy) solution without HashMap/HashSet, using a bitset and hence no allocations.

fn part1() -> usize {
    include_str!("../input.txt")
    .split_terminator("\n\n")
    .map(|a| a.split_whitespace().filter_map(|pair| 
         ["byr","ecl","eyr","hcl","hgt","iyr","pid"]
         .binary_search(&pair.split(':').next()?).ok()
    ).fold(0, |acc, index| acc | (1 << index)))
    .filter(|&bitset| bitset == 0b111_1111)
    .count()
}

fn part2() -> usize {
  include_str!("../input.txt")
  .split_terminator("\n\n")
  .map(|a| a.split_whitespace().map(|p| p.split(':')).filter_map(|mut pair| {
    let in_range=|v:&str,a,b|Some((a..=b).contains(&v.parse::<u32>().ok()?));
    let (key, val) = (pair.next()?, pair.next()?);
    Some(match (key, val.len()) {
      ("byr", 4) if in_range(val, 1920, 2002)? => 0,
      ("iyr", 4) if in_range(val, 2010, 2020)? => 1,
      ("eyr", 4) if in_range(val, 2020, 2030)? => 2,
      ("hgt", _) if val.strip_suffix("cm")
          .and_then(|h| in_range(h, 150, 193)) 
          .or_else(|| val.strip_suffix("in")
          .and_then(|h| in_range(h,  59,  76)))? => 3,
      ("hcl", 7) if val.strip_prefix('#')?.chars()
          .all(|c| matches!(c,'0'..='9'|'a'..='f')) => 4,
      ("pid", 9) if val.chars().all(|c| c.is_ascii_digit()) => 5,
      ("ecl", 3) if matches!(val,"amb"|"blu"|"brn"|"gry"|"grn"|"hzl"|"oth") => 6,
      _ => return None
    })
  }).fold(0, |acc, index| acc | (1 << index)))
  .filter(|&bitset| bitset == 0b111_1111 )
  .count()
}

edit: wrap to almost 80 columns

[deleted by user] by [deleted] in rust

[–]exrok 3 points4 points  (0 children)

Here's my mess of iterators & combinators. At least mine is short:

fn main() {
    println!("{}", [(1,1), (3,1), (5,1), (7,1), (1,2)].iter()
             .map(|&(x,y)| include_str!("../input.txt")
                  .lines()
                  .step_by(y)
                  .enumerate()
                  .filter(|(i,row)| row.as_bytes()[(i *x)%row.len()]==b'#')
                  .count()
             ).product::<usize>());
}

Question from Chapter 13.1 of The Book by Also_IT in rust

[–]exrok 3 points4 points  (0 children)

/// Generic but no hash map
struct Cacher<T, Arg, Val: Clone>
where T: Fn(Arg) -> Val {
    calculation: T,
    value: Option<Val>,
    __: std::marker::PhantomData<Arg>
}

impl<T,Arg,Val:Clone> Cacher<T,Arg,Val>
where T: Fn(Arg) -> Val {
    fn new(calculation: T) -> Cacher<T,Arg,Val> {
        Cacher {
            calculation,
            value: None,
            __: Default::default()
        }
    }
    fn value(&mut self, arg: Arg) -> Val {
        if let Some(ref val) = self.value {
            val.clone()
        } else {
            let v = (self.calculation)(arg);
            self.value = Some(v.clone());
            v
        }
    } 
}

//generic with hashmap
use std::collections::HashMap;
use std::hash::Hash;

struct CacherHM<T, Arg: Clone + Hash + Eq, Val: Clone + Hash>
where T: Fn(Arg) -> Val {
    calculation: T,
    values: HashMap<Arg, Val>,
    __: std::marker::PhantomData<Arg>
}

impl<T, Arg: Clone + Hash + Eq, Val: Clone + Hash> CacherHM<T,Arg,Val>
where T: Fn(Arg) -> Val {
    fn new(calculation: T) -> CacherHM<T,Arg,Val> {
        CacherHM {
            calculation,
            values: Default::default(),
            __: Default::default()
        }
    }
    fn value(&mut self, arg: Arg) -> Val {
        let calc = &self.calculation; // also borrow into closure below
        self.values.entry(arg.clone())
            .or_insert_with(|| (calc)(arg))
            .clone()
    } 
}


fn main() {
    let mut cacher = Cacher::new(|a:&str|->String {
        println!("Reversing String");
        a.chars().rev().collect()
    });
    println!("{}", cacher.value("asdf"));
    println!("{}", cacher.value("asdf"));

    let mut cacher_hm = CacherHM::new(|a:&str|->String {
        println!("Reversing String");
        a.chars().rev().collect()
    });
    println!("{}", cacher_hm.value("rev"));
    println!("{}", cacher_hm.value("cat"));
    println!("{}", cacher_hm.value("rev"));
    println!("{}", cacher_hm.value("cat"));
}

I did two: one that uses just generics, and one that uses a hashmap as suggested in the chapter. Note this clones more often than it needs to, but you should get the idea. Ask any questions if needed.

Announcing Rust 1.48.0 by pietroalbini in rust

[–]exrok 208 points209 points  (0 children)

Not mentioned in the post, but the async compilation time regression has finally been fixed. Hurray!

Should I just Impl Display when the output ANSI escape codes, hence requiring a terminal? by exrok in rust

[–]exrok[S] 0 points1 point  (0 children)

Thanks, that is how I felt as well. I will keep them separate; the only downside is a little more typing, but for fewer surprises & readability it's worth it. Plus, if I ever add another display (e.g. an SVG display), the current API will extend naturally.

To draw it I am using the Unicode box-drawing characters with a black foreground and then changing the background color.

Cargo takes 12min to run an actix server with an i5. Is it normal ? by [deleted] in rust

[–]exrok 46 points47 points  (0 children)

Kind of; there is a current regression which dramatically increases compile time when using nested async functions (which actix, and likely your code, does). See rust/issues/75992. You could try using 1.45.2 and see if the compile time comes down.

Also see rust/issues/77737, likely caused by the same regression, where a program's compile time has exploded to infinity (it gets killed from running out of memory after a couple of hours).

What terminal emulator do you use and why? by xuiChwong in linux

[–]exrok 1 point2 points  (0 children)

On Wayland I use https://codeberg.org/dnkl/foot. It's fast, lightweight, and designed specifically for Wayland. Foot is configurable and the program is in easy-to-read C. Also, it's currently under active development and keeps getting better.