Leap seconds still make time go forwards, not backwards. NTP clients also resolve small time discrepancies by slewing the clock (speeding it up or slowing it down slightly) rather than stepping it backwards, so time keeps advancing between syncs.
I didn’t say Unix time, I said UTC. And no it won’t report negative time, not unless somehow the system clock was modified while it was running…
UTC always goes forward regardless of the timezone and local time. That is why you should use it. To take my EPG situation above, I stored program start / end times in UTC so they would render properly whether or not DST kicked in during the middle of a program.
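To make that concrete, here's a minimal sketch (assuming the chrono and chrono-tz crates, with made-up program times): a two-hour program that straddles the NZ daylight saving changeover keeps its correct duration because the endpoints are UTC instants, and the viewer's local time is only derived at display time.

use chrono::{TimeZone, Utc};
use chrono_tz::Pacific::Auckland;

fn main() {
    // Hypothetical program stored as UTC instants: 13:00-15:00 UTC on 2022-09-24,
    // which straddles the NZ DST changeover (02:00 NZST -> 03:00 NZDT on 2022-09-25 local time).
    let start = Utc.with_ymd_and_hms(2022, 9, 24, 13, 0, 0).unwrap();
    let end = Utc.with_ymd_and_hms(2022, 9, 24, 15, 0, 0).unwrap();

    // Duration is computed on the UTC instants, so DST can't distort it.
    println!("duration = {} minutes", (end - start).num_minutes()); // 120

    // Only convert to the viewer's timezone at the presentation edge.
    println!("local start = {}", start.with_timezone(&Auckland)); // 01:00 local (NZST, UTC+12)
    println!("local end   = {}", end.with_timezone(&Auckland));   // 04:00 local (NZDT, UTC+13)
}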
Yes, as long as the rules are known, but it’s really just better to do things sanely and leave no room for doubt.
True, but so do most computers. Computers have a database of timezones and UTC offsets for the whole world. Given a UTC date and time and your current timezone, the system looks up which offset to apply to show local time. The database is very gnarly because the rules change over time, e.g. in the 70s some countries extended DST to counteract oil shortages.
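As a small illustration of how gnarly those historical rules get, here's a sketch using the chrono and chrono-tz crates (which bundle the IANA tz database): the US ran year-round DST in 1974 because of the oil crisis, so the same January date gets a different offset in 1974 and 1975.

use chrono::{Offset, TimeZone, Utc};
use chrono_tz::America::New_York;

fn main() {
    // The tz database records historical rules, not just the current ones.
    // In January 1974 the US was on emergency year-round DST, so New York
    // was UTC-4 instead of the usual winter UTC-5.
    let jan_1974 = Utc.with_ymd_and_hms(1974, 1, 15, 12, 0, 0).unwrap();
    let jan_1975 = Utc.with_ymd_and_hms(1975, 1, 15, 12, 0, 0).unwrap();

    println!("1974-01-15 offset: {}", jan_1974.with_timezone(&New_York).offset().fix()); // -04:00
    println!("1975-01-15 offset: {}", jan_1975.with_timezone(&New_York).offset().fix()); // -05:00
}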
I once developed an electronic program guide for a cable TV company in New Zealand and I’d lose my mind if I had to use timezones. The basic rule of thumb was:
a) Internally you use UTC religiously. UTC is the same everywhere on Earth, time always goes forward, most languages have classes that represent instants, durations etc. In addition you make damned sure your server time is correct and UTC.
b) You only deal with timezones when presenting something to a user or taking input from a user
Prior to that I had worked for a US trading company that set all their servers to EST and received trades through the system with the time & date expressed ambiguously. The code just had to assume EST was the default everywhere. It was dumb programming, and I bet to this day every piece of code they develop has time bugs.
Rust isn’t really OOP like C#, Java or C++ - it has structs with associated functions that you could consider an “object”, but there is no inheritance. Instead Rust uses traits, which are a little bit like interfaces in some languages.
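A rough sketch of what that looks like (the trait and types here are made up for illustration):

// A trait is a set of behaviour a type promises to provide, a bit like an interface.
trait Describe {
    fn describe(&self) -> String;
}

struct Dog { name: String }
struct Robot { id: u32 }

// Any struct can implement the trait; there is no base class and no inheritance.
impl Describe for Dog {
    fn describe(&self) -> String { format!("a dog called {}", self.name) }
}

impl Describe for Robot {
    fn describe(&self) -> String { format!("robot #{}", self.id) }
}

// Generic code is written against the trait, not a class hierarchy.
fn print_description(item: &dyn Describe) {
    println!("{}", item.describe());
}

fn main() {
    print_description(&Dog { name: "Rex".into() });
    print_description(&Robot { id: 7 });
}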
The way the kernel uses Rust at the moment is to provide safe bindings so that modules can be written in Rust, i.e. you can write a module in Rust which will be loaded correctly, is safe by default, and has access to kernel services via those bindings. I expect more of the kernel will become Rust over time, but the biggest impediment right now is that Rust relies on LLVM, and LLVM only supports a subset of the targets that a kernel could potentially support with another compiler like GCC.
Predominantly C. But even the kernel is beginning to use Rust as a way of avoiding entire classes of programming error.
The only reason people use JS is because it’s the de facto language of browsers. As a language it’s dogshit, filled with all kinds of unpleasant traps.
Here is a fun one I discovered the other day:
new Date('2022-10-9').toUTCString() === 'Sat, 08 Oct 2022 23:00:00 GMT'
new Date('2022-10-09').toUTCString() === 'Sun, 09 Oct 2022 00:00:00 GMT'
So padding the day of the month with a 0 or not changes the result by an hour (the local UTC offset here). What’s actually happening is that the zero-padded string is a valid ISO 8601 date, which JavaScript parses as UTC midnight, while the unpadded one falls back to implementation-defined parsing and gets treated as local time. Every browser does the same so I assume this is a legacy thing. Any sane language would throw an exception for the malformed string. Not JavaScript.
Lemmy is written in Rust. There might be bits of C at the periphery behind bindings.
It would be better to “git clone” a repo under threat of removal than to fork it on GitHub. That way an entire copy of its history is preserved. It’s possible the forks still exist for now, even if Yuzu removes their official repo, but if Nintendo serves GitHub the legal paperwork then the forks will get blasted.
That said, if someone clones the repo they probably ought to think twice before putting it back in the cloud without sanitizing / reconstructing the branches & history to remove the bits that got Yuzu into trouble in the first place.
Yuzu gave them the opening to sue though. If they had been more circumspect - “Oh this is to develop homebrew / indie games, nudge nudge” - then maybe Nintendo wouldn’t have unleashed the lawyers, or would have done so less effectively. After all, it wouldn’t be Yuzu’s fault if some wicked website corrupted their pure intentions by releasing device keys or patches that allowed their emulator to run commercial games. But they were more blatant than that.
Also, putting yourself in Nintendo’s shoes, of course they were going to sue. Yuzu should have known they would, since that’s what console platforms do when something interferes with their profits. Yuzu is doubly bad from that angle since it interferes with both hardware sales and game sales, unlike custom firmware / cartridges which only affect game sales.
Of course the genie is already out of the bottle. Yuzu’s source code and binaries were on GitHub for anyone to clone / fork. All the games are out in the wild. The piracy will carry on. I think it’s fair to say the Switch is effectively dead as a platform at this point. If a Switch 2 turns up this year, as rumored, then I expect it will have revised anti-piracy measures and potentially a heavy online service aspect to go with it - it’s far easier to detect pirates and wield the banhammer when a device is online.
The problem is that most languages have no native support for anything other than 32- or 64-bit floats, and some representations on the wire don’t either. Most underlying processors don’t have arbitrary precision support either.
So either you choose speed and sacrifice precision, or you choose precision and sacrifice speed. The architecture might not support arbitrary precision, but most languages have a bignum / bigdecimal library that will do it more slowly. It might then be necessary to marshal or store those values in databases or over the wire in whatever hacky way works (e.g. encapsulating the value in a string).
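A quick sketch of that trade-off in Rust, assuming the third-party bigdecimal crate for the arbitrary-precision side:

// f64 is fast but can't represent most decimal fractions exactly.
// A bignum / bigdecimal library trades speed for exactness; values can be
// marshalled over the wire or into a database as strings to avoid rounding.
use bigdecimal::BigDecimal; // assumed third-party crate
use std::str::FromStr;

fn main() {
    // Hardware floats: fast, imprecise.
    println!("{}", 0.1f64 + 0.2f64); // 0.30000000000000004

    // Arbitrary-precision decimals: slower, exact.
    let a = BigDecimal::from_str("0.1").unwrap();
    let b = BigDecimal::from_str("0.2").unwrap();
    let sum = a + b;
    println!("{}", sum); // 0.3

    // Hacky but lossless marshalling: ship it as a string.
    let on_the_wire = sum.to_string();
    let restored = BigDecimal::from_str(&on_the_wire).unwrap();
    assert_eq!(restored, BigDecimal::from_str("0.3").unwrap());
}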
We had tens of thousands of lines in our Rakefiles to build a bunch of targets, none of which were even Ruby. If I needed to build another complex build system that was a directed acyclic graph, I think I’d use Gradle, for several reasons - we had some Java targets so we’d save on an additional developer runtime, it would run faster, and Gradle is more mainstream so it’s easier to find plugins & documentation.
It probably wasn’t a big deal while it was a niche project. Then Twitter imploded, all the public instances got overloaded with new users, and the limits became obvious.
A better design is Lemmy, which is written in Rust and so scales far better. It’s compiled, and because it’s tokio / actix based it can do a lot more work asynchronously, so it isn’t spawning thousands of threads to cope with concurrent requests.
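Roughly what that model looks like, as a toy sketch using tokio (the request handler here is made up):

// Thousands of concurrent "requests" are handled as lightweight async tasks
// multiplexed onto a small thread pool, rather than one OS thread per request.
use std::time::Duration;
use tokio::time::sleep;

async fn handle_request(id: u32) -> String {
    // Simulate waiting on a database or network call; the task yields here and
    // the worker thread moves on to another task instead of blocking.
    sleep(Duration::from_millis(50)).await;
    format!("response {id}")
}

#[tokio::main]
async fn main() {
    let handles: Vec<_> = (0..10_000)
        .map(|id| tokio::spawn(handle_request(id)))
        .collect();

    for handle in handles {
        let _response = handle.await.unwrap();
    }
    println!("handled 10,000 requests without 10,000 OS threads");
}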
There is a lot of magic in Java. Try Spring Boot for example: things magically connect together with annotations, methods somehow get injected onto interfaces on the fly, or an HTTP endpoint maps onto a function with parameters because the runtime is doing it. This is most evident when you set a breakpoint in some class and there might be 4 or 5 mystery functions it passed through between it and where you thought it was being called from. Slf4j, Lombok and Hibernate do the same kind of thing.
I wrote extensively in Ruby, but for Rake - using Ruby as a build system. Can’t say I liked the language, although it was okay for how we used it. We had 20 subprojects with some very complex build targets and dependency scanning going on, and the Rake syntax was okay. Personally I think its biggest shortcoming was that the documentation was very poor, and stuff like gems felt primitive compared to other package management systems. One thing I did like about the language was that blocks can evaluate to a value, which is something I use a lot in Rust too.
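For anyone who hasn't seen it, this is what blocks-as-expressions look like in Rust (the values here are just for illustration):

fn main() {
    // A block is an expression: its last line (without a semicolon) is its value.
    let config_dir = {
        let home = std::env::var("HOME").unwrap_or_else(|_| ".".to_string());
        format!("{home}/.config/myapp") // hypothetical path, just for illustration
    };

    // if and match are expressions too, so assignments stay tidy.
    let retries = if config_dir.is_empty() { 0 } else { 3 };
    let label = match retries {
        0 => "no retries",
        1..=3 => "a few retries",
        _ => "many retries",
    };

    println!("{config_dir}: {label}");
}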
I think if I were doing an acyclic dependency build system these days I’d probably use Gradle.
As for Rails, I expect it failed to catch on because even compared to Python, Ruby is a slow language. And Python isn’t fast by any stretch. Projects that started with Rails hit the performance brick wall and moved to something else.
I actually got off my arse and did some productive programming over the Christmas break. Spent too long vegetating in front of the computer watching YouTube vids or playing games.
If you look at any modern desktop application, e.g. those built over GTK or Qt, they’re basically rendering stuff into a pixmap and pushing it over the wire. All of the drawing primitives that made X11 efficient once upon a time are useless, obsolete junk, completely inadequate for a modern experience. Instead, X11 is pushing big fat pixmaps around and it is not efficient at all.
So I doubt it makes any difference to bandwidth, except in a positive sense. I bet if you ran a Wayland desktop over RDP it would be more efficient than X11 forwarding. I’m not familiar with waypipe, but it seems to be more like a proxy between client and server, so it’s probably more dependent on the client’s use / abuse of calls to the server than RDP is when implemented server-side.
It doesn’t work like that. UTC always goes forward. Leap seconds are scheduled and known in advance, and NTP time services will just smear time advancement a little to account for the additional second. Time never has to go backwards. This is how Google does it.
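Back-of-the-envelope, a 24-hour linear smear stretches each second by 1/86400 s (about 11.6 microseconds). A tiny sketch of that arithmetic (the window length mirrors Google's published noon-to-noon scheme; the function name is made up):

// Fraction of the positive leap second absorbed so far, given how many
// seconds we are into a 24-hour linear smear window.
fn smear_offset_seconds(seconds_into_window: f64) -> f64 {
    const WINDOW: f64 = 86_400.0; // 24-hour smear window
    (seconds_into_window / WINDOW).clamp(0.0, 1.0)
}

fn main() {
    for t in [0.0, 21_600.0, 43_200.0, 86_400.0] {
        println!("{:>7.0}s into window -> {:.4}s of the leap second absorbed so far",
                 t, smear_offset_seconds(t));
    }
}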