• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Many, many years ago I had two Wyse 50 terminals, each running a split screen with two parts. I did a lot of support on remote systems (via modem!), so I would have one session on a customer system, one with source code and a running copy on our test system, and one for internal stuff. I didn’t have space for a third terminal.

    At another job I had an office with a “U” shaped desk. I would spread printouts across half the “U” and swivel around between the computer and the printouts.

  • Keep in mind that it has been decades since I last used Kermit, but I’m pretty sure the use case it was originally designed for was…

    Connect to a serial port that had a modem attached. Talk to the modem and get it to dial a number. Presumably, the remote end answered and the port attached to its modem would issue a login prompt. Negotiate the login, issue a bunch of commands to change directories, and then launch Kermit on the remote system. After that, Kermit-to-Kermit communication took over until you terminated the session. Finally, log off the remote system and hang up the modem.

    All of this stuff could be done via scripts. I seem to remember that it would actually wait for a response and then parse that response in the script. I don’t remember ever doing polling loops.
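
    From memory, a script for that dial-and-login flow might have looked roughly like this. It’s a sketch in approximate C-Kermit syntax; the device, phone number, and prompts are all invented:

        # hypothetical dial-and-login script; port, number, and prompts are made up
        set line /dev/ttyS0             # serial port with the modem attached
        set speed 9600
        set carrier-watch off           # don't wait for carrier before talking
        output ATDT5551234\13           # tell the modem to dial
        input 60 CONNECT                # wait up to 60 seconds for the modem
        input 30 login:                 # wait for the remote login prompt
        output myuser\13
        input 10 Password:
        output mypass\13
        input 30 $                      # wait for a shell prompt
        output cd /some/dir; kermit\13  # start Kermit on the remote system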

    If you’re on a *nix box of some type, it’s totally possible to open up a serial port for manual I/O even in something like a bash script. Even if you have to reverse telnet to a terminal server.
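
    A rough sketch of that manual I/O from bash (the device path and baud rate are assumptions, untested):

        # poke a modem from bash; /dev/ttyUSB0 is an assumed device path
        stty -F /dev/ttyUSB0 9600 raw -echo   # configure the port
        exec 3<>/dev/ttyUSB0                  # open it read/write on fd 3
        printf 'ATZ\r' >&3                    # send a command to the modem
        read -r -t 5 reply <&3                # wait up to 5 seconds for a reply
        echo "modem said: $reply"
        exec 3>&-                             # close the port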

  • Well, there are specific hardware configurations that are designed to be servers. They probably don’t have graphics cards but do have multiple CPUs, and are often configured to run many active processes at the same time.

    But for the most part, “server” is more about the OS configuration: no GUI, strip out all the software you don’t need (like browsers), and leave just the software required to do the job the server is going to do.
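
    On a modern Linux box, that stripping-down might look something like this (a Debian-flavored sketch; the package names are just examples):

        # rough sketch on a Debian-ish system; package names are examples
        sudo systemctl set-default multi-user.target   # boot to a console, no GUI
        sudo apt-get purge 'libreoffice*' 'firefox*'   # drop desktop-only software
        sudo apt-get autoremove --purge                # clear orphaned dependencies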

    As to updates, this also becomes much simpler, since you don’t have a lot of the crap that has vulnerabilities. I helped manage a computer department with about 30 servers, many of which were running Windows (gag!). One of the jobs was to go through the huge list of Microsoft patches every few months. The vast majority of these “require a user to browse to a certain website” in order to be exploited. Since we simply didn’t have anyone using browsers on those machines, we could ignore those patches until we did a big “catch up” patch once a year or so.

    Our Unix servers, HP-UX or AIX, simply didn’t have the same kind of patches coming out. Some of them ran for years without a reboot.

  • I’m not sure that what developers really, really need is faster programming cycles. Most teams would benefit more from controlling the whole process, from idea to deployed. How much technical debt is incurred because users/customers can’t prioritize features or give accurate requirements, there’s way too much WIP, features are huge, releases are huge and infrequent, and the feedback cycles are far too long?

    So yeah, as programmers it’s always cool to look at ways to program faster, but what’s the point in programming stuff nobody needs faster? Or programming the wrong things faster?

    I’d be willing to bet that if you asked any team, “What are the biggest impediments to delivering value to your users faster?”, the answer would not be that you can’t cut code fast enough.