> The maintenance is too high.

acquired knowledge spotted
I will let you in on a little secret.

The best “support” you can get is support from upstreams directly (I’m involved in both sides of that equation). But upstreams will often only “support” you when (1) you run the latest stable version, and (2) the upstream source code wasn’t patched willy-nilly by the packager (your distro).

So the best desktop Linux experience comes from using a rolling distro that gives you such packages, with Arch being the most prominent example.

The acquired knowledge that argues for stability and tells you otherwise is a meme.
> a better solution would be to add a method called something like `ulock` that does a combined lock and unwrap.

That’s exactly what’s done above using an extension trait! You can call `mutex_val.ulock()` with it!
Now that I think about it, I don’t like how `unwrap` can signal either “I know this can’t fail”, “the possible error states are too rare to care about”, or “I can’t be bothered with real error handling right now”.

That’s why you’re told (Clippy does that, I think) to use `expect` instead, so you can signal “whatever string” you want to signal precisely.
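For example, something along these lines (the file name and message are purely illustrative):

```rust
fn main() {
    // A bare `unwrap()` says nothing about why failure is unexpected;
    // `expect` states the assumption, and the message shows up in the
    // panic output if that assumption turns out to be wrong.
    let config = std::fs::read_to_string("app.toml") // illustrative path
        .expect("app.toml should have been created by the installer");
    println!("read {} bytes of config", config.len());
}
```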
> would you argue that it’s impossible to write a "hello, world" program in C++
bent as expected
This proposal is just a part of a damage control campaign. No (supposedly doable) implementation will ever see the light of day. Ping me when this is proven wrong.
The only (arguably*) baseless claim in that quote is this part:
> it’s theoretically possible to write memory-safe C++
Maybe try to write more humbly and less fanatically, since you don’t seem to be that knowledgeable about anything (as experienced in other threads too).
* It’s “theoretically possible” to write memory-safe assembly if we bend contextual meanings enough.
if you’re really that bothered…
```rust
use std::sync::{Mutex, MutexGuard};

trait ULock<'a> {
    type Guard;
    fn ulock(&'a self) -> Self::Guard;
}

impl<'a, T: 'a> ULock<'a> for Mutex<T> {
    type Guard = MutexGuard<'a, T>;

    fn ulock(&'a self) -> Self::Guard {
        self.lock().unwrap()
    }
}
```
or use a wrapper struct, if you really really want the method to be called exactly `lock`.
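Something like this minimal sketch, assuming you just want `lock().unwrap()` behind the original method name (the `UMutex` name is made up):

```rust
use std::sync::{Mutex, MutexGuard};

struct UMutex<T>(Mutex<T>);

impl<T> UMutex<T> {
    fn new(value: T) -> Self {
        UMutex(Mutex::new(value))
    }

    // Same name as the std method, but unwraps the poison error for you.
    fn lock(&self) -> MutexGuard<'_, T> {
        self.0.lock().unwrap()
    }
}

fn main() {
    let m = UMutex::new(0);
    *m.lock() += 1;
    assert_eq!(*m.lock(), 1);
}
```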
If lock-ergonomicsⓒ is as relevant to you as indexing, you’re doing it wrong. I would rather take indexing returning `Result`s than the other way around.
One can always wrap any code in `{||{ //.. }}()` and use question marks liberally anyway (I call them stable try blocks 😉).
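A minimal sketch of that trick (the parsing is just a stand-in for any fallible code):

```rust
fn main() {
    let (a, b) = ("1", "x");

    // The immediately-invoked closure acts as a "stable try block": `?` can
    // short-circuit out of the closure even though `main` itself returns
    // neither `Option` nor `Result`.
    let parsed: Option<(i32, i32)> = (|| {
        let x: i32 = a.parse().ok()?;
        let y: i32 = b.parse().ok()?;
        Some((x, y))
    })();

    assert_eq!(parsed, None); // "x" fails to parse, so the closure bails early
}
```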
…and that’s how you drive up metrics.
Gross!
Unprofessional.
This is why no one takes Rust seriously.
> but futures only execute when polled.
The most interesting part here is that the polling only has to take place on the scope itself. That was actually what I wanted to check, but I got distracted because all spawns are awaited in the scope in `moro`’s README example.
```rust
async fn slp() {
    tokio::time::sleep(std::time::Duration::from_millis(1)).await
}

async fn _main() {
    let result_fut = moro::async_scope!(|scope| {
        dbg!("d1");
        scope.spawn(async {
            dbg!("f1a");
            slp().await;
            slp().await;
            slp().await;
            dbg!("f1b");
        });
        dbg!("d2");
        scope.spawn(async {
            dbg!("f2a");
            slp().await;
            slp().await;
            dbg!("f2b");
        });
        dbg!("d3");
        scope.spawn(async {
            dbg!("f3a");
            slp().await;
            dbg!("f3b");
        });
        dbg!("d4");
        async { dbg!("b1"); } // never executes
    });
    slp().await;
    dbg!("o1");
    let _ = result_fut.await;
}

fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();
    rt.block_on(_main())
}
```
```
[src/main.rs:32:5] "o1" = "o1"
[src/main.rs:7:9] "d1" = "d1"
[src/main.rs:15:9] "d2" = "d2"
[src/main.rs:22:9] "d3" = "d3"
[src/main.rs:28:9] "d4" = "d4"
[src/main.rs:9:13] "f1a" = "f1a"
[src/main.rs:17:13] "f2a" = "f2a"
[src/main.rs:24:13] "f3a" = "f3a"
[src/main.rs:26:13] "f3b" = "f3b"
[src/main.rs:20:13] "f2b" = "f2b"
[src/main.rs:13:13] "f1b" = "f1b"
```
The non-awaited jobs are run concurrently, as the `moro` docs say. But what if we immediately await `f2`?
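Concretely, the only change to the program above is awaiting the handle returned by the second `scope.spawn` on the spot; just that portion is shown here:

```rust
// the second job from the example above, now awaited immediately
scope.spawn(async {
    dbg!("f2a");
    slp().await;
    slp().await;
    dbg!("f2b");
})
.await;
```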
```
[src/main.rs:32:5] "o1" = "o1"
[src/main.rs:7:9] "d1" = "d1"
[src/main.rs:15:9] "d2" = "d2"
[src/main.rs:9:13] "f1a" = "f1a"
[src/main.rs:17:13] "f2a" = "f2a"
[src/main.rs:20:13] "f2b" = "f2b"
[src/main.rs:22:9] "d3" = "d3"
[src/main.rs:28:9] "d4" = "d4"
[src/main.rs:24:13] "f3a" = "f3a"
[src/main.rs:13:13] "f1b" = "f1b"
[src/main.rs:26:13] "f3b" = "f3b"
```
`f1` and `f2` are run concurrently; `f3` is run after `f2` finishes, but doesn’t have to wait for `f1` to finish, which is maybe obvious, but… (see below).
So two things here:

1. `defer_to_scope()`?
2. Wouldn’t it be confusing if the job is awaited in the scope?

I skimmed the latter parts of this post since I felt like I read it all before, but I think `moro` is new to me. I was intrigued to find out how scoped `spawn` exactly behaves.
```rust
async fn slp() {
    tokio::time::sleep(std::time::Duration::from_millis(1)).await
}

async fn _main() {
    let value = 22;
    let result_fut = moro::async_scope!(|scope| {
        dbg!(); // line 8
        let future1 = scope.spawn(async {
            slp().await;
            dbg!(); // line 11
            let future2 = scope.spawn(async {
                slp().await;
                dbg!(); // line 14
                value // access stack values that outlive scope
            });
            slp().await;
            dbg!(); // line 18
            let v = future2.await * 2;
            v
        });
        slp().await;
        dbg!(); // line 25
        let v = future1.await * 2;
        slp().await;
        dbg!(); // line 28
        v
    });
    slp().await;
    dbg!(); // line 32
    let result = result_fut.await;
    eprintln!("{result}"); // prints 88
}

fn main() {
    // same output with `new_current_thread()` of course
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();
    rt.block_on(_main())
}
```
This prints:
```
[src/main.rs:32:5]
[src/main.rs:8:9]
[src/main.rs:25:9]
[src/main.rs:11:13]
[src/main.rs:18:13]
[src/main.rs:14:17]
[src/main.rs:28:9]
88
```
So scoped `spawn` doesn’t really spawn tasks as one might mistakenly think!
Because non-open ones are not available, even for a price. Unless you buy something bigger than the “standard” itself of course, like a company that is responsible for it, or access to it.

There is also the process of standardization itself, with committees, working groups, public proposals, etc. involved.

Anyway, we can’t backtrack on calling ISO standards and the like “open” at the global level, hence my suggestion to use more precise language (“publicly available and sharable”) when talking about truly open standards.
The term “open standard” does not cut it. People should start using “publicly available and sharable” instead (maybe there is a better name for it).

ISO standards, for example, are technically “open”. But how relevant is that to a curious individual developer, when anything you need to implement would require access to multiple “open” standards, each coming with a (monetary) price, with some extra shenanigans [archived] on top?

IETF standards, however, are actually truly open, as in publicly available and sharable.
> It implies that the value of their policy work is significantly below…
Actually, it’s always safe to assume that value is negative unless proven otherwise.
The LARPing levels in moronix comments are higher than usual, but the comedic value is still not lost.
Monthly Reminder: High or low, all Linux usage stats are fake.
I think your second link isn’t what you intended?
You scared me for a moment there. I don’t know why you thought that.
Needless to say, even with the first example, metavar expressions are not strictly needed here, as using a second pattern and recursive expansion would work.

But I wanted to showcase the power of `${ignore}`, as it can be cleaner and/or more powerful in some cases where extra patterns and recursive expansions can get messy and hard to track.
Is this crazy?
A general repeat macro that works on stable Rust would work too of course.
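For reference, here is a minimal sketch of that stable approach, using a second pattern and recursive expansion (the `repeat` name and the one-marker-token-per-repetition convention are made up for illustration):

```rust
macro_rules! repeat {
    // base case: no marker tokens left, expand to nothing
    ($body:block) => {};
    // consume one marker token, emit the body once, recurse on the rest
    ($body:block $head:tt $($tail:tt)*) => {
        $body
        repeat!($body $($tail)*);
    };
}

fn main() {
    let mut n = 0;
    repeat!({ n += 1; } . . .); // three marker tokens => the block runs three times
    assert_eq!(n, 3);
}
```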
BTW, the snippet I pointed to, and the whole match block, are not incoherent. They’re useless.
If you’re not into tiling, install openbox and a panel of your choosing. You will quickly find that you don’t need a DE at all.