If you’re genuinely worried about this, you shouldn’t be using untrusted machines for remote access.
Apache Guacamole might be a good option. It’s “clientless” (browser-based), supports various MFA methods, and uses SSH/VNC/RDP on the backend.
However, if the data on that machine is sensitive, or if that machine has access to other sensitive things on your network, I’d suggest caution in allowing remote access from untrusted machines on the wider internet.
powertop is a cool tool that can analyze your machine and provide a list of suggested power optimizations.
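For reference, the two modes I'd reach for (flags per the powertop man page; note that --auto-tune changes live settings, so it's worth reading the report first):

```shell
sudo powertop --html=powertop-report.html   # one-shot HTML report of findings and tunables
sudo powertop --auto-tune                   # apply all suggested tunables at once
```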
DNS is what you’re looking for. To keep it simple and in one place (your adguard instance), you can add local dns entries under Filters > DNS Rewrites in the format below:
192.xxx.x.47 plex.yourdomain.xyz
192.xxx.x.53 snapdrop.yourdomain.xyz
What is your root filesystem installed on - lvm, zfs, or bare disk partitions? Are you booting with grub (legacy/bios) or systemd-boot (uefi)?
Can’t beat an X230 with an i5 for that use case, and you can still find them for around 100 bucks. Swap in an X220 keyboard, maybe a new battery, coreboot it, and in my opinion you’ve got the perfect laptop. I’ve daily driven that setup for the last 5 years and it’s been great.
Without an argument, the -j option will start jobs with no limit - depending on the project, that could mean thousands or tens of thousands of processes at once. The kernel will do its best to manage this, but when your user interface is competing for CPU time with 10,000 other CPU-intensive processes, it will appear frozen.
Make’s -j option specifies the number of concurrent jobs to run, and without an argument it doesn’t limit that number at all. Usually you pass it the number of CPU cores you want to utilize. Going far beyond the number of cores you have available (as it does without an argument) will be slower, and can even freeze your system, because of all the context switching involved.
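A common pattern is to pass the core count explicitly (a sketch; nproc here is from GNU coreutils):

```shell
# Cap parallel jobs at the number of available cores instead of an unbounded -j.
jobs=$(nproc)
echo "would run: make -j$jobs"
# make -j"$jobs"                  # use every core
# make -j"$(( jobs - 1 ))"        # or leave one core free for your desktop
```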
Any proclaimed prioritization of privacy, or privacy improvement, in stock Android serves only to bring your data more directly under Google’s control at the expense of other entities, so that those other entities must pay Google as a middleman for your data. On stock Android there is no privacy - Google has access to everything, always.
In my opinion, one step that could reasonably be taken to improve the situation is for Google to go fuck itself, lose every antitrust suit brought against it, and die.
In the bridge-vids option, you can replace 2-4094 with a space-separated list of VLANs to be allowed tagged ingress/egress on the bridge interface, if you prefer to limit it. There’s nothing in the GUI for that, as far as I know.
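For illustration, a VLAN-aware bridge stanza in /etc/network/interfaces might look something like this (the interface names, addresses, and VLAN IDs are just example values):

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 10 20 30
```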
ssh predates the XDG spec and exists somewhat independently of even the idea of a desktop (it’s not common to see XDG environment variables like XDG_CONFIG_HOME in a headless environment, for example), and it uses the ~/.ssh directory on both the client and server side of a connection. I think it’s less to do with security and more to do with uniformity for something as important as ssh - ssh doesn’t need to change to use the XDG spec, and XDG doesn’t need to allot anything special for ssh when it’s already uniform across the Unix spectrum.
If you can find a cheap used micro-form-factor PC with HDMI output (eg a ThinkCentre M93p), that’s a great sustainable way to go. Stick Debian on it, get a cheap tiny Bluetooth keyboard/trackpad, stream via web browser. Bonus if it’s got a DVD drive, for the ultimate utilitarian FOSS HTPC.
If you can’t get a packaged APK directly from the developer/publisher, or from a trusted repository like the Play Store or F-Droid, I wouldn’t resort to third-party sources like these. If you can’t compare the signing certificate of an APK from an untrusted source against the one from a trusted source, you can’t be certain that what you’re installing hasn’t been tampered with.
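As a rough sketch of that comparison: apksigner (from the Android SDK build-tools) can print a certificate's fingerprints, and you only install if they match exactly. The .apk filenames below are placeholders.

```shell
# True only when both fingerprint strings are non-empty and identical.
same_fingerprint() {
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# Illustrative usage - requires apksigner and two copies of the app:
# trusted=$(apksigner verify --print-certs trusted.apk | grep 'SHA-256')
# candidate=$(apksigner verify --print-certs candidate.apk | grep 'SHA-256')
# same_fingerprint "$trusted" "$candidate" && echo "match" || echo "DO NOT INSTALL"
```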
For donating compute/storage/bandwidth to community archiving, this is a great place to check out: https://wiki.archiveteam.org/
Just my opinion, but I think they’re a great project to support.
I’d recommend a full battery calibration before running the command one more time, if you haven’t already (charge the battery fully, leave it on the charger at 100% for a while, fully discharge it until the machine shuts itself off, leave it for a bit, then fully recharge while powered off). If the calibrated values line up with a full:design ratio of ~80%, especially on a 10-year-old battery with almost 700 cycles on it, my take is that’s pretty great.
That said, I think the best way to get an accurate feel for the health of an old battery is to put it through one full cycle of normal use and time how long it takes to die.
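If you want to compute that full:design ratio yourself, here's a minimal sketch. It assumes the values come from sysfs, e.g. /sys/class/power_supply/BAT0/energy_full and energy_full_design - the BAT0 name varies by machine, and some batteries expose charge_* instead of energy_*.

```shell
# Percentage health from full capacity vs design capacity (same units for both).
battery_health() {
  awk -v f="$1" -v d="$2" 'BEGIN { printf "%.1f\n", 100 * f / d }'
}

battery_health 38000000 47520000   # example values in uWh; prints 80.0
```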