• 0 Posts
  • 101 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • Agreed. When I was fresh out of university, my first job had me debugging embedded firmware for a device which had both a PowerPC processor as well as an ARM coprocessor. I remember many evenings staring at disassembled instructions in objdump, as well as getting good at endian conversions. This PPC processor was in big-endian and the ARM was little-endian, which is typical for those processor families. We did briefly consider synthesizing one of them to match the other’s endianness, but this was deemed to be even more confusing haha
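
    A minimal Python sketch of the kind of endian conversion described above; the value and byte sequences are just illustrative:

```python
import struct

# The same 32-bit value, packed as big-endian (PPC-style) and
# little-endian (ARM-style) byte sequences.
value = 0xDEADBEEF
big = struct.pack(">I", value)     # b'\xde\xad\xbe\xef'
little = struct.pack("<I", value)  # b'\xef\xbe\xad\xde'

# Converting between endiannesses is just a byte reversal.
assert big[::-1] == little

# Reading a big-endian word on a little-endian host needs an explicit
# byte-order-aware decode, e.g. int.from_bytes:
assert int.from_bytes(big, "big") == value
assert int.from_bytes(little, "little") == value
```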



  • There was a ton of hairbrained theories floating around, but nobody had any definitive explanation.

    Well I was new to the company and fresh out of college, so I was tasked with figuring this one out.

    This checks out lol

    Knowing very little about USB audio processing, but having cut my teeth in college on 8-bit 8051 processors, I knew what kind of functions tended to be slow.

    I often wonder if this deep level understanding of embedded software/firmware design is still the norm in university instruction. My suspicion has been that focus moved to making use of ever-increasing SoC performance and capabilities, in the pursuit of making it Just Work™ but also proving Wirth’s Law in the process via badly optimized code.

    This was an excellent read, btw.



  • Your primary issue is going to be the power draw. If your electricity supplier has cheap rates, or if you have an abundance of solar power, then it could maybe find life as some sort of traffic analyzer or honeypot.

    But I think even finding a PCI NIC nowadays will be rather difficult. And that CPU probably doesn’t have any sort of virtualization extensions to make it competitive against, say, a Raspberry Pi 5.




  • 1 - I get that light is flashed in binary to code chips but how does it actually fookin work ? What is the machine emmiting [sic] this light made up of ?

    This video by Branch Education (on YouTube or Nebula) is a high-level explanation of every step in a semiconductor fab. It doesn’t go over the details of how semiconductor junctions work, though; that sort of device physics is discussed in Ben Eater’s YouTube video, “how semiconductors work”.

    2 - How was program’s, OSs, Kernal [sic] etc loaded on CPU in early days when there were no additional computers to feed it those like today ?

    When the CPU powers up, typically the very first thing it executes is the bootloader. Bootloaders vary depending on the system, and today’s modern Intel or AMD desktop machines boot very differently to their 1980s predecessors. However, since the IBM PC laid the foundation for how most computers booted for nearly four decades, it may be instructive to see how it worked in the 80s. This WikiBook on x86 bootloading should be valid for all 32-bit x86 targets, from the original 8086 to the i686. It may even be valid beyond that, until UEFI took off and changed booting into its more modern form.
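
    As a toy illustration of that legacy BIOS handoff: the firmware reads the first 512-byte sector of the boot disk into memory at 0x7C00 and jumps to it, but only if the sector ends in the 0x55 0xAA signature. A Python sketch (the synthetic sector bytes are made up, obviously not real boot code):

```python
# The legacy BIOS only hands control to a boot sector that ends in the
# two-byte signature 0x55 0xAA at offset 510.
BOOT_SIGNATURE = b"\x55\xaa"

def is_bootable(sector: bytes) -> bool:
    """Mimic the BIOS check: exactly 512 bytes, signature at the end."""
    return len(sector) == 512 and sector[510:512] == BOOT_SIGNATURE

# A synthetic "boot sector": 510 bytes of padding plus the signature.
sector = bytes(510) + BOOT_SIGNATURE
assert is_bootable(sector)
assert not is_bootable(bytes(512))  # no signature: BIOS would skip this disk
```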

    But even before the 80s, computers could have a program/kernel/whatever loaded using magnetic tape, punch cards, or even by hand with physical switches, each representing one bit.

    But how does the computer decode this binary “machine code” into instructions to perform? See this video by Ben Eater, explaining machine instructions for the MOS 6502 CPU (circa 1975). The age of the CPU is not important; rather, by the 70s the basics of CPU operation had already been laid down, and that CPU is easy to explain yet non-trivial.
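
    The fetch/decode/execute loop at the heart of that can be sketched in a few lines of Python. The opcodes below are real 6502 immediate-mode encodings, but this is a teaching toy, not an emulator:

```python
# Toy fetch-decode-execute loop in the spirit of the 6502.
# Opcodes are genuine 6502 immediate-mode encodings:
#   LDA # = 0xA9, ADC # = 0x69, BRK = 0x00.
memory = bytes([0xA9, 0x05,   # LDA #$05  -> A = 5
                0x69, 0x03,   # ADC #$03  -> A = A + 3
                0x00])        # BRK       -> stop

a, pc = 0, 0
while True:
    opcode = memory[pc]; pc += 1    # fetch
    if opcode == 0xA9:              # decode/execute: load accumulator
        a = memory[pc]; pc += 1
    elif opcode == 0x69:            # add immediate (carry flag ignored here)
        a = (a + memory[pc]) & 0xFF; pc += 1
    elif opcode == 0x00:            # break: halt the loop
        break

assert a == 8
```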

    3 - I get internet is light storing information but how ? Fookin HOW ?

    The mechanics of light bouncing inside a fibre optic cable are well explained in this YouTube video by engineerguy. But for an explanation of how ones-and-zeros get converted into light to be transmitted, that’s a bit more involved. I might just point you to the Wikipedia page for fibre optic communications.

    How the data is encoded is important, as this has significant impact on bandwidth and data integrity, not just for light but for wireless RF transmission and wireline transmission. For wireless, this Branch Education video on Starlink (YouTube or Nebula) is instructive. And for wired, this Computerphile YouTube video on ADSL covers the challenges faced.

    Quite frankly, I might just recommend the entirety of the Computerphile channel, particularly their back catalogue when they laid down computer fundamentals.

    4 - How did it all come to be like it is today and ist it possible for one human to even learn how it all works or are we just limited one or two things ? Like cab we only know how to program or how to make hardware but not both or all ?

    As of 2024, the field is enormous, to the point that a CompSci degree necessarily has to be focused on a specific concentration. But that doesn’t necessarily mean the hard stuff like device physics is off-limits, leaving just stuff like software and AI. Sam Zeloof has been making homemade microchips, devising his own semiconductor process and posting it on YouTube.

    Specifically to your question about either software or hardware, the specialty of embedded software engineering requires skills with low-level software or firmware, as well as dealing with substantial hardware-specific details. People who write drivers or libraries for new hardware need skills from both regimes, acting as the bridge between the Electrical Engineers who design the hardware and the software developers who utilize it.

    Likewise, developers for high performance computers need to know the hardware inside-out, to have any chance of extracting every last bit (pun intended) of speed. However, these developers tend to rely upon documentation such as data sheets, rather than having to be keenly aware of how the hardware was manufactured. Some level of logical abstraction is necessary to tractably understand today’s necessarily large and complex systems.

    5 - Do we have to join Intel first or something to learn how most of the things work lol ?

    Nope! Often, you can look to existing references, such as Linux source code, to provide a peek at what complexities exist in today’s machines. I say that, but the Linux kernel is truly a monster, not because it’s badly written, but because they willingly take code to support every single bleeding platform that people are willing to author code for. And that means lots and lots of edge cases; there’s no such thing as a “standard” computer. X86 might be the closest to a “standard” but Intel has never quite been consistent across that architecture’s existence. And ARM and RISC-V are on the rise, in any case.

    Perhaps what’s most important is to develop strong foundations to build on. Have a cursory understanding of computing, networking, storage, wireless, software licenses, encryption, video encoding/decoding, UI/UX, graphics, services, containers, data and statistical analysis, and data exchange formats. But then pick one and focus on it, seeing how it interacts with other parts of the computing world.

    Growing up, I had an interest in IT and computer maintenance. Then it evolved into writing websites. Then into writing C++ software. Right before university, I started playing around with the Arduino’s Atmel ATmega328P microcontroller directly, and so I entered uni as a Computer Engineer, hoping to do both software and hardware.

    The space is huge, so start somewhere that interests you. From the examples above, I think online videos are a fantastic resource, but so can blog posts written by engineers at major companies, as can talks at conferences, as can sitting in at university courses.

    Good luck and good studies!


  • To lay some foundation, a VLAN is akin to a separate network with separate Ethernet cables. That provides isolation between machines on different VLANs, but it also means each VLAN must be provisioned with routing, so as to reach destinations outside the VLAN.

    Routers like OpenWRT often treat VLANs as if they were distinct NICs, so you can specify routing rules such that traffic to/from a VLAN can only be routed to WAN and nowhere else.

    At a minimum, for an isolated VLAN that requires internet access, you would have to

    • define an IP subnet for your VLAN (e.g. a /24 for IPv4 and a /64 for IPv6)
    • advertise that subnet (DHCP for IPv4 and SLAAC for IPv6)
    • route the subnets to your WAN (NAT for IPv4; ideally no NAT66 for IPv6)
    • and finally enable firewalling

    As a reminder, NAT and NAT66 are not firewalls.
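
    A sketch of what those steps might look like in OpenWRT’s UCI config; the “guest” name, VLAN ID 30, and addresses are all hypothetical, so check them against your own setup:

```
# /etc/config/network (excerpt) -- a hypothetical "guest" VLAN
config interface 'guest'
        option device 'br-lan.30'       # VLAN 30, tagged on the LAN bridge
        option proto 'static'
        option ipaddr '192.168.30.1'
        option netmask '255.255.255.0'

# /etc/config/firewall (excerpt) -- guest zone may only reach WAN
config zone
        option name 'guest'
        list network 'guest'
        option input 'REJECT'
        option forward 'REJECT'

config forwarding
        option src 'guest'
        option dest 'wan'
```

    You would still pair this with a DHCP pool (and SLAAC for IPv6) on the guest interface so clients can self-configure.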




  • Re: 2.5 Gbps PCIe card

    In some ways, I kinda despise the 802.3bz specification for 2.5 and 5 Gbps on twisted pair. It came into existence after 10 Gbps twisted-pair was standardized, and IMO exists only as a reaction to the stubbornly high price of 10 Gbps ports and the lack of adoption – 1000 Mbps has been a mainstay and is often more than sufficient.

    802.3bz is only defined for twisted pair and not fibre. So there aren’t too many xcvrs that support it, and even fewer SFP+ ports will accept such xcvrs. As a result, the cheap route of buying an SFP+ card and a compatible xcvr is essentially off-the-table.

    The only 802.3bz compatible PCIe card I’ve ever personally used is an Aquantia AQN-107 that I bought on sale in 2017. It has excellent support in Linux, and did 10 Gbps line rate in my testing.

    That said, I can’t imagine that cards that do only 2.5 Gbps would somehow be less performant. 2.5 Gbps hardware is finding its way into gaming motherboards, so I would think the chips are mature enough that you can just buy any NIC and expect it to work, just like buying a 1000 Mbps NIC.

    BTW, some of these 802.3bz NICs will eschew 10/100 Mbps support, because of the complexity of retaining that backwards compatibility. This is almost inconsequential in 2024, but I thought I’d mention it.


  • I’ve only looked briefly into APC/UPC adapters, although my intention was to do the opposite of your scenario. In my case, I already had LC/UPC terminated duplex fibre through the house, and I want to use it to move my ISP’s ONT closer to my networking closet. That requires me to convert the ISP’s SC/APC to LC/UPC at the current terminus, then convert it back in my wiring closet. I hadn’t gotten past the planning stage for that move, though.

    Although your ISP was kind enough to run this fibre for you, the price of 30 meters of LC/UPC-terminated fibre isn’t terribly excessive (at least here in USA), so would it be possible to use their fibre as a pull-string to run new fibre instead? That would avoid all the adapters, although you’d have to be handy and careful with the pull forces allowed on a fibre.

    But I digress. On the xcvr choice, I don’t have any recommendations, as I’m on mobile. But one avenue is to look at a reputable switch manufacturer and find their xcvr list. The big manufacturers (Cisco, HPE/Aruba, etc) will have detailed spec sheets, so you can find the branded one that works for you. And then you can cross-reference that to cheaper, generic, compatible xcvrs.


  • In my first draft of an answer, I thought about mentioning GPON but then forgot. But now that you mention it, can you describe if the fibres they installed are terminated individually, or are paired up?

    GPON uses just a single fibre for an entire neighborhood, whereas connectivity between servers uses two fibres, which are paired together as a single cable. The exception is for “bidirectional” xcvrs, which like GPON use just one fibre, but these are more of a stopgap than something voluntarily chosen.

    Fortunately, two separate fibres can be paired together to operate as if they were part of the same cable; this is exactly why the LC and SC connectors come in a duplex (aka side-by-side) format.

    But if the ISP does GPON, they may have terminated your internal fibre run using SC, which is very common in that industry. But there’s a thing with GPON specifically, where the industry has moved to polishing the fiber connector ends with an angle, known as Angled Physical Contact (APC) and marked with green connectors, versus the older Ultra Physical Contact (UPC) that has no angle. The benefit of APC is to reduce losses in the ISP’s fibre plant, which helps improve services.

    Whereas in data center and networking applications, I have never seen anything but UPC, and that’s what xcvrs will expect, with rare exceptions such as GPON xcvrs.

    So I need to correct my previous statement: to be fully functional as designed, the fiber and xcvr must match all of: wavelength, mode, connector, and the connector’s polish.

    The good news is that this should mostly be moot for your 30 meter run: even with the extra losses from a mismatched polish, the link should still come up.

    As for that xcvr, please note that it’s an LRM, or Long Reach Multimode, xcvr. Would it work at 30 meters? Probably. But an LR xcvr that is single mode 1310 nm would be ideal.


  • Regarding future proofing, I would say that anyone laying single pairs of fibres is already going to constrain themselves when looking to the future. Take 100 Gbps xcvrs as an example: some use just the single pair (2 fibres total) to do 100 Gbps, but others use four pairs (8 fibres total) driving each at just 25 Gbps.

    The latter are invariably cheaper to build, because 25 Gbps has been around for a while now; they’re just shoving four optical paths into one xcvr module. But 100 Gbps on a single fiber pair? That’s going to need something like DWDM which is both expensive and runs into fibre bandwidth limitations, since a single mode fibre is only single-mode for a given wavelength range.

    So unless the single pair of fibre is the highest class that money can buy, cost and technical considerations may still make multiple multimode fibre cables a justifiable future-looking option. Multiplying fibres in a cable is likely to remain cheaper than advancing the state of laser optics in severely constrained form factors.

    Naturally, a multiple single-mode cable would be even more future proofed, but at that point, just install conduit and be forever-proofed.


  • Starting with brass tacks, the way I’m reading the background info, your ISP was running fibre to your property, and while they were there, you asked them to run an additional, customer-owned fibre segment from your router (where the ISP’s fibre has landed) to your server further inside the property. Both the ISP segment and this interior segment of fibre are identical single-mode fibres. The interior fibre segment is 30 meters.

    Do I have that right? If so, my advice would be to identify the wavelength of that fibre, which can be found printed on the outer jacket. Do not rely on just the color of the jacket, and do not rely on whatever connector is terminating the fibre. The printed label is the final authority.

    With the fibre’s wavelength, you can then search online for transceivers (xcvrs) that match that wavelength and the connector type. Common connectors in a data center include LC duplex (very common), SC duplex (older), and MPO (newer). 1310 and 1550 nm are common single mode wavelengths, and 850 and 1300 nm are common multimode wavelengths. But other numbers are used; again, do not rely solely on jacket color. Any connector can terminate any mode of fibre, so you can’t draw any conclusions there.

    For the xcvr to operate reliably and within its design specs, you must match the mode, wavelength, and connector (and its polish). However, in a homelab, you can sometimes still establish link with mismatching fibres, but YMMV. And that practice would be totally unacceptable in a commercial or professional environment.

    Ultimately, it boils down to link losses, which are high if there’s a mismatch. But for really short distances, the xcvrs may still have enough power budget to make it work. Still, this is not using the device as intended, so you can’t blame them if it one day stops working. As an aside, some xcvrs prescribe a minimum fibre distance, to prevent blowing up the receiver on the other end. But this really only shows up on extended distance, single mode xcvrs, on the order of 40 km or more.
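
    That link-loss reasoning can be sketched as a simple power-budget calculation. The figures below are plausible datasheet-style numbers, not from any specific transceiver, so treat this as an illustration only:

```python
# Back-of-envelope optical link budget. All numbers are illustrative;
# always check the actual transceiver datasheet.
tx_power_dbm = -5.0         # transmitter launch power
rx_sensitivity_dbm = -14.4  # weakest signal the receiver can decode

# The budget is how many dB of loss the link can tolerate.
budget_db = tx_power_dbm - rx_sensitivity_dbm  # 9.4 dB to "spend"

# Losses on a short in-home run: two mated connectors, 30 m of fibre,
# plus a rough penalty for e.g. an APC/UPC polish mismatch.
connector_loss_db = 2 * 0.5       # ~0.5 dB per mated connector pair
fibre_loss_db = 0.030 * 0.35      # 30 m at ~0.35 dB/km (1310 nm)
mismatch_penalty_db = 3.0         # assumed hit from a polish mismatch

total_loss_db = connector_loss_db + fibre_loss_db + mismatch_penalty_db
assert total_loss_db < budget_db  # link should still come up
```

    On a 40 km run, the fibre loss term alone would eat most of that budget, which is why mismatches that are survivable at 30 meters are unacceptable at distance.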

    Finally, multimode is not dead. Sure, many people believe it should be deprecated for greenfield applications. I agree. But I have also purchased multimode fibre for my homelab, precisely because I have an obscene number of SFP+ multimode, LC transceivers. The equivalent single mode xcvrs would cost more than $free so I just don’t. Even better, these older xcvrs that I have are all genuine name-brand, pulled from actual service. Trying to debug fibre issues is a pain, so having a known quantity is a relief, even if it means my fibre is “outdated” but serviceable.


  • I think this can be more generalized as: why do some people eschew anonymity online? And a few plausible reasons come to mind:

    • a convention carried over from the pre-Internet days to be honest and frank as one would be in-person
    • having no prior experience with anonymity or a basis to expect anonymity to last
    • they’re already a real-life edgelord and so the in-person/online distinction is artificial, or have an IDGAF attitude to such distinctions

    IMO, older people tend to have the first reason, having grown up with the Internet as a communication tool. Younger, post-2000 people might have the second reason, because from the events during their lifetime, privacy has eroded to the point it’s almost mythical. Or that it’s like the landed gentry, that you have to be highly privileged to afford to maintain anonymity.

    I have no thoughts as to the prevalence of the third reason, but I’m reminded of a post I saw on Mastodon months ago, which went something like this: every village used to have the village idiot, but was mostly benign because everyone in town knew he was an idiot. One moron in every 5 or 10 thousand people is fine. But with the Internet, all the village idiots can network with each other, expanding their personal communities and hyping themselves up to do things they otherwise wouldn’t have found support for.

    Coming back to the question, in the context above, maybe online anonymity is a learned practice, meaning it has to be taught and isn’t plainly natural. Nothing quite like the Internet has ever existed in human history, so what’s “natural” may just not have caught up yet. That internet literacy and safety is a topic requiring instruction bolsters this thought.


  • just reuse old equipment you have around

    Fully agree. Sometimes the best equipment is that which is in-hand and thus free.

    you can just send vlan tagged traffic across a dumb switch no problem

    A small word of caution: some cheap unmanaged switches rigidly enforce 1500 Byte payload sizes, and if the switch has no clue that 802.1Q VLAN tags even exist, it will count the extra 4 bytes as part of the payload. So your workable MTU for tagged traffic could now be 1496 Bytes.

    Most traffic will likely traverse that switch just fine, but max-sized 1500 Byte payload frames with a VLAN tag may be dropped or cause checksum errors. Large file transfers tend to use the full MTU, so be aware of this if you see strange issues specific to tagged traffic.
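
    The arithmetic behind that caveat, as a quick sketch using the standard Ethernet figures:

```python
# Frame-size arithmetic for the VLAN-through-a-dumb-switch caveat.
MTU = 1500          # max payload of a standard Ethernet frame
VLAN_TAG = 4        # extra bytes added by an 802.1Q tag
ETH_OVERHEAD = 18   # dst+src MAC (12) + EtherType (2) + FCS (4)

untagged_frame = MTU + ETH_OVERHEAD        # 1518 bytes on the wire
tagged_frame = untagged_frame + VLAN_TAG   # 1522 bytes on the wire
assert (untagged_frame, tagged_frame) == (1518, 1522)

# A switch that treats the tag as payload leaves only this much room
# for the actual IP packet inside a tagged frame:
usable_mtu = MTU - VLAN_TAG
assert usable_mtu == 1496
```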


  • Since you mentioned that your ONT is 2.5 Gbps, I am assuming that you need a twisted-pair NIC. I don’t have a recommendation for a NIC exactly for 2.5 Gbps, but since you’re specifically looking for low operating temperature, you may want to avoid 10 Gbps twisted-pair NICs.

    10GBaseT – sometimes called 10G copper, but 10Gbps DACs also use copper – operates very hot, whether in an SFP+ module or as a NIC. The latter is observable just by looking at the relatively large heat sinks needed for some cards. This is an inevitable result of trying to push 800 MSymbols/sec over pairs of copper wires, and it’s lucky to exceed 55 meters on CAT6. It’s impressive how far copper wire has come, but the end is nigh.

    Now, it could be that when a 10 Gbps NIC is only linked at 2.5 Gbps, it could drop into a lower power state. But my experience with the 10/100/1000BASE-T specs suggests that the PHY on a 10 Gbps NIC will just repeat the signals four times to produce the quarter-rate 2.5 Gbps transmission. So possibly no heat savings there.

    A dedicated 2.5 Gbps card would likely operate cooler, and is more likely to be available as a single port that would fit in your available PCIe slots, whereas multi-speed 802.3bz 2.5/5/10 Gbps NICs tend to be dual-port.

    A final note: you might find “2.5 Gbps RJ45 SFP+” modules online. But I’m not aware of a formal 802.3 spec that defines the 2.5/5 Gbps speeds for modular connectors, so these modules probably won’t work with SFP+ NICs.