  • Ok, I’m back. I did some quick research and it looks like that Mikrotik switch should be able to do line-rate switching between the SFP+ ports. That’s important because if it were somehow doing non-hardware switching, the performance would be awful. That said, my personal opinion is that Mikrotik products are rather unintuitive to use. My experience has been with older Ubiquiti gear and even older HP ProCurve enterprise switches. To be fair, though, prosumer products like Mikrotik’s have to make some tradeoffs compared to the money-is-no-object enterprise space. But I wasn’t thrilled with the CLI on their routers; maybe the switches are better?

    Moving on, that NIC appears to be equivalent to an Intel x520, so drivers and support should exist for any mainline OS you’re running. For 10 Gbps and beyond, I agree that you want to go with pluggable modules when possible, unless you absolutely know that the installation will never run fibre.

    I will note that 10 Gbps over Cat 5e – while not mentioned in the standard, and thus officially undefined behavior – has been reported to work over short distances, in the range of 15-30 meters by some accounts. The twisted-pair Ethernet specs only call out the supported wire types by their category designation, but ultimately it’s the signal integrity of the differential signals that matters. Cat 3, 5, 5e, 6, etc are just increasingly better at maintaining a signal over a distance. This being officially undefined just means that if it doesn’t work, the manufacturer told no lie.

    But you’re right to avoid 10 Gbps twisted pair, as the xcvrs are expensive, thermally ridiculous, and power-hungry, and they have length limits shorter than what the spec allows, because it’s hard to stuff all the needed hardware into an SFP+ pluggable module. Whereas -SR optics are cheap, and DACs even cheaper (when the distance is short enough). No real reason to adopt twisted-pair 10 Gbps if fibre is an option.

    That said, I didn’t check the compatibility of your selected SR transceiver against your NICs and switch, so I’ll presume you’ve done your homework for that.

    Going back to the x8 card in an electrically x4 slot: there’s a provision in the PCIe spec where the only two widths that are mandatory to support are 1) the physical card width, and 2) the x1 width. No other widths are necessarily supported. So there’s a small possibility that the NIC will only connect at PCIe x1, which will severely limit your performance. But this is kinda pathological, and 9 out of 10 PCIe cards will do graceful width reduction beyond what the PCIe spec demands. And being an x520 variant, I would expect the driver to have no issue with that, as crummy PCIe drivers can break when their bad assumptions fall through.

    Overall, I don’t see any burning red flags with your plan. I hope you’ll update us with new posts as things progress!


  • I’ll have to review your post in greater detail in a bit, but some initial comments: cross-vendor compatibility of xcvrs was a laudable goal, defeated only by protectionist business interests; the result is that the only real way to validate compatibility is to try it.

    Regarding your x4 slot and the NICs being x8: does your mobo have the slot cut in such a way that it can accept a physical x8 card even though only the x4 lanes are electrically connected?

    For keystone jacks, I personally use them but I try not to go wild with them, since just like with electrical or RF connectors, each one adds some amount of loss, however minor. Having one keystone jack at each end of the fibre seems like it shouldn’t be an issue at all.

    Final observation for now: this plan sets up a 10 Gb network with fibre, but your use-case for now is just a bigger pipe to your file server. Are you expecting to expand your use-cases in the future? If not, the same benefit can be had with a direct fibre run from your single machine to your file server. Still 10 Gbps, but no switch needed in the middle, and you have less risk of cross-vendor incompatibility.

    I’m short on time rn, but I’ll circle back with more thoughts soon.



  • I read this, and thought it was kind of all over the place. Even the first “falsehood” about always immediately crashing is answered as “true for some languages but not others”. Even the notion of superlatives in CS like “always” and “never” rarely holds, including in this very sentence, and almost certainly when talking about multiple programming languages.

    And on that point, it’s a minor quibble, but while Go’s nil pointers are similar to C null pointers and Rust’s null raw pointers, it’s a strange thing to have the title be about falsehoods about null pointers.

    But then many of the other supposed falsehoods are addressed only for the C language, such as whether null dereference is UB or not.

    1. On platforms where the null pointer has address 0, C objects may not be placed at address 0.

    I would like to see a ©itation [pun intended] for this being a supposed falsehood, since my understanding is that if an implementation uses 0x0 as the null pointer, then the check for a null pointer is to check if it’s equal to 0x0, which would require that no “thing” in C use that address.
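    To illustrate with a minimal sketch (the “object at 0x0” is simulated with a cast, since a conforming implementation won’t actually hand one out):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Pretend the linker placed a C object at address 0x0
           (purely hypothetical; simulated here with a cast). */
        int *object_at_zero = (int *)(uintptr_t)0x0;

        /* On an implementation whose null pointer is all-bits-zero, this
           comparison is just "is the address 0x0?" -- it cannot tell
           "no object" apart from "the object living at 0x0". */
        if (object_at_zero == NULL)
            puts("indistinguishable from NULL, so no object may occupy 0x0");

        return 0;
    }
    ```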



  • While I get your point that Python is often not the most appropriate language to write certain parts of an OS, I have to object to the supposed necessity of C. In particular, the bolded claim that an OS not written in C is still going to have C involved.

    Such an OS could instead have written its non-native parts using assembly. And while C was intentionally designed to be similar to assembly, it is not synonymous with assembly. OS authors can and do write assembly when even the C language cannot do what they need, and I gave an example of this in my comment.

    The primacy of C is not universal, and has a strong dependency on the CPU architecture. Indeed, there’s a history of building machines which are intended for a specific high-level language, with Lisp Machines being among the most complex examples – since Lisp still has to be compiled down to some sort of hardware instructions. A modern example would be Java, which defines the programming language as well as the ISA and bytecode: embedded Java processors were built, and thus there would have been zero need for C apart from legacy convenience.


  • As it happens, this is strikingly similar to an interview question I sometimes ask: what parts of a multitasking OS cannot be written wholly in C? As one might expect, the question is intentionally open-ended so as to probe a candidate’s understanding of the capabilities and limitations of the C language. Your question asks about Python, but I posit that some OS requirement which a low-level language like C cannot accomplish would be equally intractable for Python.

    Cutting straight to the chase, C is insufficient for initializing the stack pointer. Sure, C itself might not technically require a working stack, but a multitasking operating system written in C must have a stack by the time it starts running user code. So most will do that initialization much earlier, so that the OS’s startup functions can utilize the stack.

    This is normally done by bootloader code, which is typically written in assembly, runs when the CPU is taken out of reset, and then jumps into the OS’s C code. The C functions will allocate local variables on the stack, and everything will work just fine, even rewriting the stack pointer using intrinsics to cause a context switch (although this code is often – but not always – written in assembly too).
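    As a rough sketch of that hand-off (assuming an ARM Cortex-M target and a GNU toolchain; _stack_top and kernel_main are made-up names):

    ```c
    /* Hypothetical reset handler: the few instructions that must run
       before any C function can be called. "naked" tells GCC not to
       emit a function prologue, since there is no stack to use yet. */
    __attribute__((naked)) void Reset_Handler(void)
    {
        __asm__ volatile(
            "ldr r0, =_stack_top \n" /* address supplied by the linker script */
            "mov sp, r0          \n" /* the one step C itself cannot perform */
            "bl  kernel_main     \n" /* now it is safe to enter C code */
            "b   .               \n" /* spin forever if kernel_main returns */
        );
    }
    ```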

    The crux of the issue is that the initial value of the stack pointer cannot be set using C code. Some hardware like the Cortex M0 family will initialize the stack pointer register by copying the value from 0x00 in program memory, but that doesn’t change the fact that C cannot set the stack pointer on its own, because invoking a C function may require a working stack in the first place.
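    That hardware feature is what lets a Cortex M0 vector table be written entirely in C. A minimal sketch (the symbol and section names are assumptions, supplied by a linker script):

    ```c
    #include <stdint.h>

    extern uint32_t _stack_top;   /* assumed linker-script symbol */
    void Reset_Handler(void);

    /* Placed at address 0x00 by the linker: the hardware reads word 0 as
       the initial stack pointer and word 1 as the reset address, so no
       assembly is needed to get into C. */
    __attribute__((section(".isr_vector"), used))
    const void *const vector_table[] = {
        &_stack_top,
        (const void *)Reset_Handler,
    };
    ```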

    In Python, I think it would be much the same: how could Python itself initialize the stack pointer necessary to start running Python code? You would need a hardware mechanism like with the Cortex M0 to overcome this same problem.

    The reason the Cortex M0 added that feature is precisely to enable developers to never be forced to write assembly for that architecture. They can if they want to, but the architecture was designed to be developed with C exclusively, including interrupt handlers.

    If you have hardware that natively executes Python bytecode, then your OS could work. But for x86 platforms or most other targets, I don’t think an all-Python, no-assembly OS is possible.



  • I guess your nephew can start studying to become a network engineer now lol

    In all seriousness, a 16 port managed switch exposes enough complexity to develop a detailed understanding of Ethernet and Layer 2 concepts, while not having to commit to learning illogical CLI commands to achieve basic functionality. 16 ports is also enough to wire up a non-trivial network, with ports to spare for exercising loop detection/protection or STP, but doesn’t consume a lot of electricity.

    I would pair that switch with a copy of The All-New Switch Book, 2nd Edition to go over the networking theory. Yes, that book is a bit dated but networking fundamentals have not changed that much in 15 years. Plus, it can be found cheap, or on the high seas. It’s certainly not something to read cover-to-cover, since you can skip anything about ATM networks.

    Then again, I think students might just simulate switch behaviors and topologies in something like GNS3, so no hardware needed at all.


  • I suspect that PG&E’s smart meters might: 1) support an infrared pulse through an LED on the top of the meter, and 2) use a fairly-open protocol for uploading their meter data to the utility, which can be picked up using a Software Defined Radio (SDR).

    Open Energy Monitor has a write-up about using the pulse output, where each pulse means a quantity of energy was delivered (eg 1 Watt-hour). So counting 1000 such pulses would be 1 kWh, and that would be a way to track your energy consumption on any timescale.

    What it won’t do is provide instantaneous power (ie kW drawn at this very moment) because the energy must accumulate to the threshold before sending a pulse. For example, a 9 Watt LED bulb that is powered on would only cause a new pulse every 6.7 minutes. But for larger loads, the indication would be very quick; a 5000 W dryer would emit a new pulse after no more than 0.72 seconds.
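    The arithmetic is easy to sanity-check (a throwaway snippet; 1 Wh per pulse assumed, as above):

    ```c
    #include <stdio.h>

    /* Seconds between pulses = 3600 * (Wh per pulse) / watts. */
    static double seconds_per_pulse(double watts, double wh_per_pulse)
    {
        return 3600.0 * wh_per_pulse / watts;
    }

    int main(void)
    {
        printf("9 W LED bulb: %.0f s (~%.1f min)\n",
               seconds_per_pulse(9.0, 1.0), seconds_per_pulse(9.0, 1.0) / 60.0);
        printf("5000 W dryer: %.2f s\n", seconds_per_pulse(5000.0, 1.0));
        return 0;
    }
    ```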

    The other option is decoding the wireless protocol, which people have done using FOSS software. An RTL-SDR receiver is not very expensive, is very popular, and can also be used for other purposes besides monitoring the electric meter. Insofar as USA law is concerned, unencrypted transmissions are fair game to receive and decode. This method also has a wealth of other useful info in the data stream, such as instantaneous wattage in addition to the counter registers.



  • As a side note, the development of the corridor would not only improve connectivity of Central California residents to the Bay Area and SoCal, but also to the Sacramento region. Although the Capitol Corridor does reach Sacramento via the Bay Area, this section is crowded by commuters and the train must navigate the slow curves of San Pablo Bay west of Martinez. From the thumbnail above, someone in SLO might have a quicker journey to Sacramento via Paso Robles and Hanford, bypassing the Bay Area entirely.

    The inland town of Hanford is presently served by the San Joaquins but is also home to a future High Speed Rail station, as part of the first operating segment from Bakersfield to Merced. It is reasonably expected that when that high speed section is complete, travelers from the Paso Robles bus can board a high speed train north to Merced, with a cross-platform guaranteed transfer to the conventional San Joaquins train waiting at the station to continue north to its existing destinations of Sacramento or Oakland.

    Though as it happens, the San Joaquins itself is pursuing an expansion to the north, beyond Sacramento towards Chico, overlapping communities which are served only by the two one-way Coast Starlight trains. This expansion will use UP’s Sacramento Subdivision that runs north-south.

    An odd quirk of Sacramento is that the principal train station sits only on UP’s Martinez Subdivision, which runs west to the Bay Area and east to Reno. The only junction between the Martinez and Sacramento subdivisions is Haggin Junction east of the station. But Haggin is not a complete junction, and northbound traffic on the Sacramento Subdivision must pass north of and then reverse into the junction to enter Sacramento station to the west.

    This is not ideal for the San Joaquins northern expansion, and so they’ve decided to outright skip the main Sacramento station in their plans. Accordingly, for someone in SLO heading to Chico, it is indeed more advantageous to travel inland by bus and then train, to avoid the Bay Area congestion and a connection from the Capitol Corridor somewhere in Sacramento. But for a destination east of Sacramento, the Capitol Corridor route would be more advantageous.

    No plans exist to upgrade Haggin Junction, never mind the disruption it would cause to downtown Sacramento. Instead, the transfer to Sacramento station would likely happen from a new San Joaquins station linked to SacRT’s Gold LRT line in Midtown Sacramento.

    As for why the San Joaquins couldn’t expand operations on their already-occupied Fresno Subdivision and instead has to build these new stations just to head north: the Fresno Subdivision is at max capacity, and turning north would require a brief westward traversal onto the Martinez Subdivision before turning north at Haggin Junction. This is too much impact for UP to accept, in addition to wholly bypassing the communities between Lodi and Sacramento, which don’t yet have passenger rail service even though they see freight trains on the Sacramento Subdivision.





  • To make sure we’re all on the same page, this proposal involves creating an account with a service provider, then uploading some sort of preexisting, established proof-of-identity (eg passport data page), and then requesting a token against that account. The token is timestamped and non-fungible, so that when the token is presented to an age-restricted website, that website can query the service provider to verify that: 1) the token is still valid, 2) the person associated with the token is at least a certain age.

    If I understood that correctly, what you’re describing is an account service combined with an identity service, which could achieve the objectives of a proof-of-age service, but does not minimize privacy complications. And we already have account services of varying degrees of complexity: Google Accounts, OAuth, etc. Basically any service where you log in, since the point of logging in is to associate with an account, although one person can have multiple accounts. Passing around tokens isn’t strictly necessary, since you can just ask the user to prove account ownership by signing into their Google Account, for example. An account service need not verify age at all, eg signing in to post a comment on a news article.

    Compare this with an identity service like ID.me, which provides records on individuals; there cannot be multiple records for the same live person. This type of service is distinct from an account service, but some accounts are necessarily tied to a single identity, such as online banking. But apart from KYC regulations or filing one’s taxes online, an identity service isn’t required for most day-to-day activities, and any additional uses pose identity theft concerns.

    Proof-of-age – as I understand it from the Australian legislation – does not necessarily demand an identity service be used to satisfy the law, but the question in this Lemmy thread is whether that’s a distinction without a difference. We don’t want to be checking identities if we don’t have to, for privacy and identity theft reasons.

    In short, can a person be uniquely, anonymously age-verified online? I suspect not. Your proposal might be reasonable for an identity service, but it does not move us further towards a theoretical privacy-centric proof-of-age validation mechanism. If such a mechanism doesn’t exist, then the Australian legislation would be mandating identity checks for subject websites, which would then become targets for theft of those identity records. This would be bad.


  • Sadly, this type of scheme suffers from two problems: 1) repudiation, and 2) transferability. An ideal system would be non-repudiable, meaning that when a GUID is used, it is unmistakably an action that could only be undertaken by the age-verified person. But a GUID cannot guarantee that, since it’s easy enough for an adult to sell their valid GUIDs online to the highest bidder, en masse. And being a simple string, a GUID can easily and covertly be transferred to the buyer, so that no one but those two would know that the transaction took place, or which GUID was passed along.

    As a general rule, when complex questions arise which might possibly be solved by encryption, it’s fairly safe to assume that expert cryptographers have already looked at the problem and that no easy or obvious solution exists. That’s not to say that cryptographers must never be questioned, but that the field is complicated enough that incomplete answers abound.

    IMO, the other comments have it right: there does not exist a general solution to validate age without also compromising anonymity or revealing one’s identity to someone. And that alone is already a privacy compromise.


  • I’m on mobile so I can’t compile this myself, but can you clarify on what you’re observing? Does “nothing” mean no output to stdout and stderr? Or that you did get an error message but it’s not dispositive as to what libcurl was doing? Presumably the next step would be to validate that the program is executing at all, either with a debugger or printf-style debug statements at all junctures.

    Please include as much detail as you can, since this is now more akin to a bug report.

    EDIT: wait a sec. What exactly is this example code meant to do? The Pastebin API call suggests that this is meant to upload a payload to the web, not pull it down. But CURLOPT_WRITEFUNCTION is for receiving data from a URI. What is your intention with running this example program?


  • Unless I’m mistaken, that first example as written will POST to the network resource and then immediately clean up. The fact that CURLOPT_NOPROGRESS is set means that the typical progress meter curl shows in an interactive shell will be suppressed. The comment in the code even says that to make the example do something useful, you’ll have to pass callback pointers, possibly by way of CURLOPT_WRITEFUNCTION or CURLOPT_WRITEDATA.

    From the curl_easy_perform() man page:

    A network transfer moves data to a peer or from a peer. An application tells libcurl how to receive data by setting the CURLOPT_WRITEFUNCTION and CURLOPT_WRITEDATA options. To tell libcurl what data to send, there are a few more alternatives but two common ones are CURLOPT_READFUNCTION and CURLOPT_POSTFIELDS.
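    For the receive direction, the minimal wiring looks something like this (a sketch; https://example.com/ is a stand-in URL):

    ```c
    #include <stdio.h>
    #include <curl/curl.h>

    /* libcurl calls this with each chunk of received data; we forward
       the bytes to the FILE* registered via CURLOPT_WRITEDATA. */
    static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
        return fwrite(ptr, size, nmemb, (FILE *)userdata);
    }

    int main(void)
    {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, stdout);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "curl_easy_perform() failed: %s\n",
                    curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        return 0;
    }
    ```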



  • I was once working on an embedded system which did not have segmented/paged memory, and I had to debug an issue where memory corruption preceded an uncommanded reboot. The root cause was a for-loop gone amok, intended to walk a linked list for every member of an array of somewhat-large structs. The terminating condition was faulty, so the loop would write a garbage byte or two every few hundred bytes in memory, run right off the end of the 32-bit address space, and wrap around to the start of memory.

    But because the loop only overwrote a few bytes and then skipped over large swaths of memory, it would continue passing through the entire address space over and over. And since the struct size wasn’t a power of two, the garbage bytes would eventually land on the crucial reset vector, which would finally reboot the system and end the misery.
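    Something in the spirit of this, reconstructed from memory (details invented for illustration):

    ```c
    /* NOT the actual code -- an illustration of the failure mode. On flat,
       unprotected 32-bit memory, the pointer strides past the array's end,
       wraps at the 4 GiB boundary, and keeps scribbling. */
    struct entry {
        struct entry *list_head;
        char payload[300];        /* deliberately not a power-of-two size */
    };

    void scribble(struct entry *table)
    {
        /* BUG: the terminating condition is never satisfied, so 'e'
           marches through the entire address space, leaving a garbage
           byte every sizeof(struct entry) bytes. */
        for (struct entry *e = table; e->list_head != (void *)-1; e++)
            e->payload[0] = 0x42;
    }
    ```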

    Because the system wouldn’t be fatally wounded immediately, the memory corruption was observable on the system until it went down, limited only by the CPU’s memory bandwidth. That made it truly bizarre to diagnose, as the corruption wasn’t in any one feature and changed every time.

    Fun times lol