• 0 Posts
  • 24 Comments
Joined 6 months ago
Cake day: December 27th, 2023

  • i once had to look at a firewall appliance cluster (i discovered it could not do any failover in its current state, but somehow the decider was ok with that). when looking at its logs, i discovered rsh and rcp access from an ip address that belonged to a military organisation on a different continent. i had to make it a security incident. later the vendor said that this was only the cluster-internal routing (over the dedicated crosslink) used for synchronisation (the thing that did not work), handled by a separate routing table used only for clustersync, and that it could never be used for real traffic. but why not simply use an ip that you “own” yourself and give it a PTR record with a hint about what this ip is used for? instead of customers scratching their heads over why the military still uses rcp and rsh. i guess because no company reads firewall logs anyway XD

    someone else’s ip? yes! because they’ll never find out !!1!

    i really appreciate that ipv6 has things like a dedicated documentation address range (2001:db8::/32) and that fc00::/7 for unique local addresses is nicely short.
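    just for illustration, python’s stdlib can check such special-purpose ranges (the addresses below are made up):

    ```python
    import ipaddress

    # well-known special-purpose IPv6 ranges
    ULA = ipaddress.ip_network("fc00::/7")       # unique local addresses, RFC 4193
    DOC = ipaddress.ip_network("2001:db8::/32")  # documentation range, RFC 3849

    for addr in ["fd12:3456:789a::1", "2001:db8::42", "2a00:1450:4001::1"]:
        ip = ipaddress.ip_address(addr)
        kind = "ULA" if ip in ULA else "documentation" if ip in DOC else "global"
        print(addr, "->", kind)
    ```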


  • ipv6 in companies… ipv6 is not hard, but for internal networking no company (really) “needs” more than rfc1918 address space. thus any decision in that direction is always seen as “less” needed, while any bonus for (da)magement personnel is of course crucial for the whole company’s survival…

    for a company’s services to be reachable from outside via ipv6, mostly “only” the loadbalancers/revproxies etc. need to be ipv6-ready. but this e.g. also produces logs that possibly break decades-old regexes no one understands any more (as the good engineers left due to too many bonuses paid to damagement personnel), while other access/deny rules could break, or worse, let traffic through where they should block (remember that 192.168. could be part of an ipv6 address, think ::ffff:192.168.0.1, IF some genius used a matching mechanism that treats the dot “.” as a wildcard because overpaid damagement personnel made them rush too fast), and such rules could be hidden “somewhere”. altogether, technical debt is a huge blocker for everything, especially company growth, and if no customer “demands” ipv6, then it stays on the damagement personnel’s list as “fulfilling the wishes of engineers to keep them happy” instead of on the always-deleted “cleaning up technical debt caused by damagement personnel” list.
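    the unescaped-dot pitfall from above is easy to demonstrate; a tiny sketch (the “rule” here is hypothetical, not from any real config):

    ```python
    import re

    # a naive rule meant to match the rfc1918 prefix 192.168.
    naive  = re.compile(r"192.168.")    # '.' matches ANY character here
    strict = re.compile(r"192\.168\.")  # escaped dots match literal dots only

    samples = [
        "192.168.1.1",          # the intended match
        "::ffff:192.168.1.1",   # ipv4-mapped ipv6, matched by both patterns
        "2001:db8::192a168b",   # ipv6 hex that the naive rule also matches
    ]
    for s in samples:
        print(f"{s:24} naive={bool(naive.search(s))} strict={bool(strict.search(s))}")
    ```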

    setting up firewalls for ipv6 is quite easy, and if you go the fine-grained “whitelisted or drop/block” approach from the beginning, it might take a bit for the ipv6 specials to become known to you. but the much bigger thing is IMHO the then-current state of the firewall rules. who knows every existing rule? which rules should have been removed already and must not be ported to ipv6? usually firewalls and their rules are a big mess due to… again, too many bonuses paid to damagement personnel, hindering the company from taking the needed steps forward…

    ipv6 adoption is slow for reasons that are driving huge cars that in turn speed up other problems ;-|


  • maybe start with an adjustable setup:

    • rent a cheap vm, i currently have one for 1€/month (for the first year, cancel monthly) from ovh
    • set up 3 openvpn instances that redirect all routes through the tunnel: one ipv4-only, one ipv6-only and one with both
    • set up the client on your mobile phone and your laptop, both with all three vpns to choose from
    • have the option to choose now and try out ipv6, standalone or dualstack depending on what vpn you switch on
    • use this setup to blame services that don’t support ipv6 yet or maybe are broken with dualstack (see the sketch below) 🤣
    • rise from under-the-stone (ipv6 disabled) to in-sunlight (a well-above-industry-standard-level !!! “quick”-adopting new-network-technologies “genius”) 🤣
    • improve your openvpn setup from above to be reachable over ipv6 too, if you haven’t done that from the beginning. done: you’ve reached the pro level of the late-adopter-noob group

    (if you want, ask for config snippets)
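    and for the blaming part, a minimal sketch to see which address families actually work through whichever vpn is currently up (the target hostname is just an example):

    ```python
    import socket

    def reachable(host: str, family: int) -> bool:
        """try a TCP connection to port 443 over the given address family."""
        try:
            info = socket.getaddrinfo(host, 443, family, socket.SOCK_STREAM)
            with socket.socket(family, socket.SOCK_STREAM) as s:
                s.settimeout(5)
                s.connect(info[0][4])
            return True
        except OSError:
            return False

    host = "www.google.com"  # pick any service you want to test/blame
    print("ipv4:", reachable(host, socket.AF_INET))
    print("ipv6:", reachable(host, socket.AF_INET6))
    ```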

    btw i prefer to wait for ipv8😁 before “demanding” ipv6 from services i use 🤣


  • smb@lemmy.ml to Programmer Humor@programming.dev · “prompt engineering” · 3 months ago

    > that a moderately clever human can talk them into doing pretty much anything.

    besides that, LLMs are good enough to let moderately clever humans believe that they actually got an answer that was more than guessing and probabilities based on millions of troll messages, advertising lies, fantasy books, scammer webpages, fake news, astroturfing, propaganda of the past centuries including the currently made-up narratives, and a quite long prompt invisible to that human.

    cheerio!




  • after looking at the ticket myself, the relevant things IMHO are:

    • a person filed a bug report due to not seeing what changes in the new version caused a different behaviour
    • that person seemed pushy, first telling the dev where patches should be sent (is this normal? i guess not; better let the dev decide where patches go or, in this case, whether patches are needed at all), then coming up with ceo-style wordings (highly visible, customer experience of an untested but nevertheless released-to-live product is bad due to this, implicitly “your”, bug)
    • the pushiness is counterbalanced by “please help”
    • free-of-charge consulting was given by the one pointing out that such changes are likely visible in the changelog (i did not look, though); nevertheless they were pointed to the parameter, which assumes RTFM (if the docs were indeed updated): a default value had changed, and the behaviour could be adjusted by using that given parameter.

    up to there that person - belonging to M$ or not (don’t know and don’t care) - behaved IMHO rather correctly: submitting a bug report for something that looked like one, being a bit pushy, wanting priority, trying to command, but still formally at least “asking” for help. but at that point the “bug” seemed resolved to me. it looks like the person either did not read the manual and changelog, or maybe the manual or changelog lacks that information; but the latter was not claimed later, so i guess that person just read neither the changelog nor the manual.

    instead - so it seems to me - that person demanded immediate and free-of-charge consulting on how exactly the switch should be used to work in that specific use case, which would imply the dev looks into the example files and maybe does trial and error himself, just so that that person needs neither to invest the time to learn the software the company depends on, nor to hire a consultant to do the work.

    i think (intentional or not) abusing a bug tracker for demanding free-of-charge end-user consulting from a dev is a bad idea, unless one wants(!) to actively waste the precious time of the dev (whom that high-priority ticket for the highly visible, already-released-to-live product relies on) or has even worse intentions, like:

    • uploading example files with exploits in them, pointing to the exact versions that include the RCE vulnerability the sample file would abuse; the “bug” was just reported because it fits the version needed for exploitation, and pressure was made by naming big companies, to maybe make the dev run a vulnerable version on his workstation before someone finds out, so that an upstream attack could take place directly on the dev’s workstation. but that’s just a fictional worst-case scenario.

    to me this clearly looks like a “different culture” problem. in companies where everyone is paid by basically the same employer, abusing an internal bug tracker for quick internal consulting would probably be seen as just normal and best practice, because the dev who knows and actually works on the code likely has the solution right at hand without thinking much, while the other person, who is in charge of quick-fixing an untested but already-released-to-customers product, neither has sufficient knowledge of how the thing works, nor is given the time to learn it or at least read changelogs and manuals, nor the time to learn the basics of general upstream software culture.

    in companies, the https://en.m.wikipedia.org/wiki/Peter_principle could be a problem that imho likely leads to such situations, but this is a guess, as i know nobody working there, and i am not convinced that that person is in fact working for the named company; the name that shows up in that ticket is, i would assume, a reason not to rely too much on names in ticket systems always being real names.

    the behaviour that causes the bad postings here in this lemmy thread is to me likely “just” a culture problem. that person would be well advised to learn to know the open source culture, netiquette etc., and to learn to behave differently depending on with whom, where and how they communicate, what to expect, and how to interact productively to the benefit of their upstream too, which is so often the “real price” in open source. it could be that in the company that rolled out the untested product, it is seen as best practice to immediately grab the dev who knows a software and let him help you with whatever you can’t do on your own (for whatever reason), whenever you manage to encounter one =]

    i assume the pushiness likely comes from their hierarchy. it is not uncommon that so-called leaders just create pressure downwards, maybe because they have no clue of the thing and do not want to gain that clue; but that i cannot know, it’s just a picture in my head. in a company that seems to put pressure on releasing an untested product to customers, i guess i am not too wrong with the direction of that assumption. what the company maybe should learn is that releasing untested and/or unfinished products to live is a bad habit; but i also assume that if they wanted to learn that, they would have started learning it roundabout two decades ago. again, i do not know what company that person works - or worked - for; it could be just a subcontractor of the named one too. and it could also be that the pushiness (telling it’s for m$, that it’s live, has impact on customers etc.) was really decided by someone up the ladder who has literally no experience at all in how to handle upstream in such situations. hierarchies can be very dysfunctional sometimes, and in companies, saying “impact to customers” is sometimes likely the same as saying “boss says asap”.

    what i would suggest their customers (those who were given a beta version as production-ready) should learn: when someone (maybe) continuously delivers differently than advertised, then after experiencing this a few times, the customer would be insane to assume that that bad behaviour will vanish by pure hope plus throwing money into hands where money maybe already didn’t help improving their habits for assumingly decades. and when feeding the ever-hungry with money does not resolve the problems, maybe looking towards those who have a non-money-dependent, grown-up culture could actually provide more really usable products. evaluation of new solutions (which one would really be best for a specific use case, e.g.) or testing new versions before really rolling them out to live might be costly, especially when done thoroughly, but can provide a lot of really valuable stability, otherwise unreachable for those who only throw money at shareholders of brands and maybe rely on pure hope for all of the rest. especially when that brand maybe even officially announced to remove their testing department ;+) what should a sane and educated customer expect then? but again, to note: i do not know which companies really are involved and how exactly. from the ticket i do not see which company that person directly works for, nor whether the claim that m$ is involved is a fact or just a false claim in hope for quicker help (companies already too desperate to test products before going live could be desperate again and in need of even more help when their bad habits have piled up too long and begin falling on their heads)


  • the xz vulnerability came in through a superfluous dependency on systemd: xz was only the library that was abused to exploit systemd’s superfluous dependency hell. sshd does not use xz, but systemd does depend on it. sshd does not need systemd, but it was attacked through that library dependency.

    we should remove any pointless dependencies that can be found on a system, to prevent such attacks in the future by reducing dependency-based attack vectors to a minimum.
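    a quick way to see what a binary actually pulls in at runtime (linux with ldd assumed; the sshd path differs per distro):

    ```python
    import subprocess

    def linked_libs(binary: str) -> list[str]:
        """list the shared libraries a binary links at runtime (via ldd)."""
        out = subprocess.run(["ldd", binary], capture_output=True, text=True, check=True)
        return [line.split()[0] for line in out.stdout.splitlines() if line.strip()]

    for lib in linked_libs("/usr/sbin/sshd"):
        flag = "  <-- worth questioning" if "lzma" in lib or "systemd" in lib else ""
        print(lib + flag)
    ```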

    also we should increase the overall level of privilege separation where systemd is a good bad example, just look at the init binary and its capability zoo.

    The company that hired “the” systemd developer should IMHO start to really fix these issues!

    so please hold your “$they have fixed it” back until the root cause that made the xz dependency-level attack possible in the first place has really been fixed =)

    Of course pointing it out was good, but now the root cause should be fixed, not just a random symptom that happened to be the first visible attack that used this attack vector introduced by systemd.


  • looking at the official timeline it is not completely a microsoft product, but…

    1. microsoft hated all of linux/open source for ages, even publicly called it a cancer etc.
    2. microsoft suddenly stopped its hate speech after the long-term “ineffectiveness” (as in: not destroying) of its actions against the open source world became obvious over time
    3. systemd appeared on stage
    4. everything within systemd is microsoft style. journald is literally microsoft logging; how services are “managed”, started etc. is exactly the flawed microsoft service management; how systemd was pushed onto distributions is similar to how microsoft pushes things onto its victi… eh… “customers”. systemd breaks its promises like microsoft does (e.g. it has never been a drop-in replacement, like microsoft claimed its OS to be secure while first making actual use of separation of users from admins, e.g. by filesystem permissions, “really” in 2007 with the need of an extra click, where unix already used permissions for such protection in 1973). systemd causes chaos and removes deterministic behaviour from linux distributions (before systemd, windows was the only operating system that would show different errors at different times during installation on the very same perfectly working hardware; now similar chaos can be observed on systemd distros too). there AFAIK still does not exist a definition of the “binary” protocol of journald; every normal open source project would have published that official definition in the first place, but the systemd developers’ statement was like “we take care of it, just use our libraries”, which is microsoft style for “use our products”. the superfluous systemd features do more harm than they help (journald’s “protection” from log flooding uses like 50% cpu cycles for a huge amount of wanted and normal logs, while a sane logging system would happily use only 3% cpu for the very same amount of logs per second whilst ‘not’ throwing away single log lines like journald does; thus journald exhaustively and pointlessly abuses system resources for features that do more harm than the thing they were said to help with in the first place). making the init process a network-reachable service looks to me as bad as when microsoft once put its web engine (iis) into kernelspace to be a bit faster, yet still being slower than apache while adding insecurity that later was an abused attack vector. systemd adds pointless dependencies all along the way, like microsoft does with its official products, to put some force on its customers for whatever official reason they like best. systemd was pushed onto distributions with a lot of force and damage, even onto distributions that had the freedom to NOT force their users onto a specific init system in their very roots (and the push to place systemd inside those distros went even further, circumventing the unstable->testing->stable rules, like microsoft does with its patches). this list is very far from complete, and still no end is in sight.
    5. “the” systemd developer is finally officially hired by microsoft

    i said that systemd was a microsoft product long before its developer was hired by microsoft in 2022. and even if he hadn’t been hired by them, systemd would still be a microsoft-style product in every important way, with everything that is wrong in how microsoft does things: beginning with design flaws, added insecurities and unneeded attack vectors, added performance issues, false promises, usage bugs (i had never before seen a freshly logged-in user get logged off again right away on a linux system, except when systemd wants to stop/start something in the background because of its ‘fk y’, where one would “just try to log in again and not think about it”, like with any other of microsoft’s shitware), ending in insecure and unstable systems where one has to “hope” that “the providers” will take care of it without continuing to add even more superfluous features, attack vectors etc., as they always did until now.

    systemd is, in every way i care about, a microsoft product. and systemd’s attack vectors through “needless dependencies” have just been added to the list of things “proven” (not only predicted) to be as bad as any M$ product in this regard.

    I would not go as far as to say that this specific attack was done by microsoft itself (how could i?), but i consider it a possibility, given the facts that they once publicly called linux/open source a “cancer” and that their “sudden” change to “support the open source world” looks to me like the poison “Gríma” used on “Théoden”, along with some other observations and interpretations. however, i strongly believe that microsoft secretly “likes” every single bit of damage that any of systemd’s pointlessly added dependencies or other flaws could do to linux/open source, very much. and why shouldn’t they like any damage done to one of their obvious opponents (as in money-gain and “dictatorship” power)? it’s a US company, what would one expect?

    And if you want to argue that systemd is not “officially” a product of the microsoft company… well, people also say “i googled it” when they mean “i used one of the search engines actually better than google.com”, same as with other things like “tempo” or “zewa” where i live. since the systemd developer works for microsoft, and it seems he works on systemd as part of this work contract, and given all the microsoft-style flaws in it from the beginning, i consider systemd a product of microsoft. i think systemd overall also “has components” of apple products, but these are IMHO none of a technical nature and thus far from being part of the discussion here; also, apple does not produce “even more systemd”, and apple has - in my experience - very different flaws that i did not encounter in systemd (yet?), thus it’s clearly not an apple product.


  • Before pointing to vulnerabilities of open source software in general, please always look into the details: who introduced the actual attack vector in the first place, whether it was “without any need”, and if so, maybe also “why”. The strength of open source in action should not be seen as a deficit, especially not in such a context.

    To me it looks like an evil-ish company has put lots of effort over many years into injecting its very own steady attack-vector increase, via an “otherwise” needless introduction of uncounted dependencies into many distros.

    such a ‘needless’ dependency is liblzma for ssh:

    https://lwn.net/ml/oss-security/20240329155126.kjjfduxw2yrlxgzm@awork3.anarazel.de/

    > openssh does not directly use liblzma. However debian and several other distributions patch openssh to support systemd notification, and libsystemd does depend on lzma.

    … and that was where and how the attack then surprisingly* “happened”

    I consider the attack vector here to have been the superfluous systemd with its excessive dependency cancer: the result of using a microsoft-alike product. Using M$-alike code, what would one expect to get?

    *) no surprises here, let me predict that we will see more of their attack vectors in action in the future: as an example have a look at the init process, systemd changed it into a ‘network’ reachable service. And look at all the “cute” capabilities it was designed to “need” ;-)

    however, distributions free of microsoft(-ish) systemd are available for all who do not want the “microsoft experience” in otherwise security-driven** distros

    **) like doing privilege separation instead of the exact opposite by “design”


  • i am happy to have a raspberry pi setup connected to a VLAN switch. internet is behind a modem (like bridged mode) connected by ethernet to one switchport, while the raspi routes everything through one tagged physical GB switchport. the setup works fine with two raspis and failover without tcp disconnections during an actual failover, only a few seconds of delay when that happens; so basically voip calls recover after seconds and streaming is not affected, while in a game a second off might already be too much. however, as such hardware failures happen rarely, i am running only one of them anyway.

    as firewall i am using shorewall, while for some special routing i also use the unbound dns resolver (one can easily configure static results for any record) and haproxy with sni inspection for specific https routing, for the rather specialized setup i have.

    my wifi is done by an openwrt but i only use it for having separate wifis bridged to their own vlans.

    thus this setup allows for multi-zone networks at home, like a wifi for visitors with daily changing passwords and another for chromecast or home automation, each with their own rules, hardware redundancy and special tweaking. everything that runs on gnu/linux is possible, including pihole, wireguard, ddns solutions, traffic statistics, traffic shaping/QOS, traffic dumps or even SSL interception, if you really want to import your own CA into your phone and see what data your phone’s apps (those that don’t use certificate pinning) are transferring when calling home, and much more.

    however, regarding ddns it sometimes feels safer and more reliable to have a somehow reserved IP that would not change. some providers offer rather cheap tunnels for this purpose. i once had a free (ipv6) tunnel at hurricane electric (besides another one for IPv4), but now i use VMs in data centers.
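    such a ddns update is usually just an authenticated http call from cron; a minimal sketch (the endpoint, hostname and token here are all made up):

    ```python
    import urllib.request

    # hypothetical ddns endpoint -- replace with your provider's real update API
    UPDATE_URL = ("https://ddns.example.net/update"
                  "?hostname=home.example.net&token=SECRET")

    def update_ddns() -> str:
        """tell the provider our current public address (it sees our source IP)."""
        with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
            return resp.read().decode().strip()

    print(update_ddns())  # e.g. run every few minutes from cron
    ```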

    i do not see any ready product that is this flexible. to me the best ready router system seems to be openwrt: you are not bound to a hardware vendor, get security updates longer than with any commercial product, can copy your config 1:1 to a new device even if the hardware changes, and have the possibility to add packages with special features to it.

    “openwrt” is IMHO the most flexible ready solution for long-term use. “pfsense” is also very worth looking at; it has some similarities to openwrt while being different.



  • smb@lemmy.ml to Linux@lemmy.ml · “Btw” · 3 months ago

    woman would take care of a literal horse instead of going to therapy. i don’t see anything wrong there either.

    it’s just that a horse is way more expensive, cannot be put aside for a week during vacations (could a notebook be put aside?), and one cannot make backups of horses or carry them along when visiting friends. Horses are way more cute, though.


  • sorry if i might repeat someone’s answer, i did not read everything.

    it seems you want it for “work”; that assumes that stability, and maybe something like LTS, is sort of the way to go. this also means older but stable packages. maybe better choose a distro that separates new features from bugfixes; this removes most of the hassle that comes with rolling releases (like: every single bugfix comes with two more new bugs, one removal/incompatible change of a feature that you relied on, and at least one feature that cripples stability or performance whilst you cannot deactivate it… yet…)

    likely there is at least some software you want to update outside of the regular package repos, like i did for years with chromium, firefox and thunderbird, using a shellscript that compared the current version with the latest remote one, then downloaded and unpacked it if needed.
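    such a check-and-fetch script is quickly done; a rough sketch of the idea in python (the URLs and paths are placeholders, real projects publish their versions differently):

    ```python
    import json, pathlib, urllib.request

    # placeholder endpoint -- substitute the project's real release metadata
    VERSION_URL = "https://releases.example.org/myapp/latest.json"
    INSTALL_DIR = pathlib.Path.home() / "apps" / "myapp"

    def installed_version() -> str:
        vfile = INSTALL_DIR / "VERSION"
        return vfile.read_text().strip() if vfile.exists() else "none"

    with urllib.request.urlopen(VERSION_URL, timeout=10) as r:
        meta = json.load(r)  # e.g. {"version": "1.2.3", "url": "https://..."}

    if meta["version"] != installed_version():
        print("updating to", meta["version"])
        urllib.request.urlretrieve(meta["url"], "/tmp/myapp.tar.gz")
        # unpack to INSTALL_DIR and write the new VERSION file here
    else:
        print("already up to date")
    ```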

    however, maybe some things NEED a newer system than you currently have; thus, if you need such software, maybe consider running it in VMs, maybe using ssh and X11 forwarding (oh my, i still don’t use/need wayland *haha)

    as for me, i like to have some things shared anyway, like my emails on an IMAP store accessible from my mobile devices, and some files synced across devices using nextcloud. maybe think outside the box from the beginning. no arch-like OS gives you the stability that the long-settled things like debian or redhat/centos offer, but be aware that some OSes might suddenly change to rolling release (like centos, i believe) or include rolling-release software made by third parties without respecting their own rules about unstable/testing/stable branches, and thus might cripple their stability by such decisions. better keep checking whether what you update to really is what you want.

    but for stability (like at work) there is nothing more practical than ancient packages that still get security fixes.

    in roughly the last 15 years or more, i only reinstalled my workstation or laptop for:

    • hardware problems, mostly an aged disk, like an ssd with its wear level down (while recovery from backup or direct syncing is not reinstalling, right?)
    • the OS becoming EOL. that’s it.

    if you choose to run servers and services like imap and/or nextcloud, there is some gain in being able to quickly switch workstations without having to clone/copy everything: you only place some configs there and you’re done.

    A multi-OS setup is more likely to cover “all” needs, and tools like x2vnc exist and can be very handy then; when i had such a setup, i nearly forgot that i was working on two very different systems.

    I would suggest making recovery easy: maybe put everything on a raid1, and make sure you have an offsite and an offline backup with snapshots, so in case something breaks you just need to replace hardware. that’s the stability i want for the tools i work with, at least.

    if you want to use a rolling-release OS for something work-related, i would suggest making sure that no one external (their repo, package manager etc.) could ever prevent you from reinstalling the exact version you had at an exact point in time (snapshots from repos, install media etc.). then put everything into something like ansible and verify that reapplying old snapshots is straightforward for you; then (and not earlier) i would say such OSes are ok for something you consider as important as “work”. i tried arch linux at a time when they had already stopped supporting the old installer while the “new” installer wasn’t ready for use at all, thus i never really got into long-term use of arch linux for something i rely on, because i couldn’t even install the second machine with the then-broken install procedure *haha

    i believe one should consider NOT tinkering too much on the workstation. having to fix something you personally broke “before” being able to work on something important is the opposite of awesome. better have a second machine instead, a swappable harddrive, or use VMs.

    The exact OS is IMHO not important. i personally use devuan, as it is not affected by some instability annoyances that are present in ubuntu and probably some more distros that use that same software. at work we monitor some of those bugs of that software within ubuntu, because it creates extra hassle there; we work around those, so it’s mostly just a buggy, annoying thing visible in monitoring.


  • my 2 cents just in case…:

    A raid6 is not a replacement for a backup ;-) i use rdiff-backup, which is easy to use, stores only one full backup, and all increments go into the past, while it is only possible to delete the oldest increments (afaik no “merging”); i never needed anything else. there should be one off-site backup and another offline one, synced once in a while manually. make complete dumps (including triggers etc.) of your databases before doing the backup ;-)
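    the dump-then-backup order can be wrapped in a few lines; a sketch (the paths, database name and the use of pg_dump are assumptions, and rdiff-backup’s cli syntax differs a bit between versions):

    ```python
    import datetime
    import subprocess

    # dump the database first so the backup contains a consistent snapshot
    stamp = datetime.date.today().isoformat()
    with open(f"/srv/dumps/mydb-{stamp}.sql", "w") as dump:
        subprocess.run(["pg_dump", "--no-owner", "mydb"], stdout=dump, check=True)

    # then let rdiff-backup store a full mirror plus reverse increments
    subprocess.run(["rdiff-backup", "/srv", "/mnt/backup/srv"], check=True)

    # prune increments older than a year (only the oldest can be removed)
    subprocess.run(["rdiff-backup", "--remove-older-than", "1Y", "/mnt/backup/srv"],
                   check=True)
    ```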

    i like to have a recreatable server setup: set it up manually, then put everything i did into ansible, try to recreate a “spare” server using ansible and the backup, test everything; then you can be sure you have also “documented” your setup to a good degree.

    for hardware i do not have many assumptions about performance (until it hits me), but an always-running in-house server should better save power (i learned this the costly way). it is possible to turn cpus off and run on only one cpu at reduced frequency in times without performance needs; that could help a bit, and at least it would feel good to do so, while turning cpus back on + setting a higher frequency is quick and can be easily scripted.
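    on linux, that scripting boils down to a couple of sysfs writes (root required; paths can differ per kernel and cpufreq driver):

    ```python
    import pathlib

    CPU = pathlib.Path("/sys/devices/system/cpu")

    def set_online(cpu: int, online: bool) -> None:
        """hot-(un)plug a core; note cpu0 usually has no 'online' file."""
        (CPU / f"cpu{cpu}" / "online").write_text("1" if online else "0")

    def set_max_freq(cpu: int, khz: int) -> None:
        """cap the core's frequency via cpufreq (value in kHz)."""
        (CPU / f"cpu{cpu}" / "cpufreq" / "scaling_max_freq").write_text(str(khz))

    # example "idle mode": park cpus 2 and 3, cap cpu1 at 800 MHz
    for core in (2, 3):
        set_online(core, False)
    set_max_freq(1, 800_000)
    ```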

    hard drives: make sure you buy 24/7-rated ones; they are usually way more hassle-free than the consumer grades and likely “only” cost double the price. i would always place the system on an SSD, but always as raid1 (not raid6), while the “other” disk could then maybe be a magnetic one set to write-mostly.

    as i do not buy “server” hardware for my home server, i always buy the components twice when i change something, so that i have the spare parts ready at hand when i need them. running a server for 5+ years often ends with not being able to buy the same parts again, and then you have to first search for what you want, order, test, maybe send it back as it might not fit… unstable memory? mainboard sending smoke signals? with spare parts at hand, a matter of minutes! the only thing i am missing with my consumer-grade home server hardware is ecc ram :-/

    for cooling i like to use a 12cm fan and power it with only 5v (instead of the 12v it wants), so that it runs smoothly slow and nearly as silent as passive-only cooling, but heat does not build up in the summer. do not forget to clean off the dust once in a while… i never had a 5v-powered 12v 12cm fan that had any problems with its bearings, and i think one of them ran for over a decade. i think the 12-volt fans last longer on 5v, but no warranty from me ;-)

    even with a headless setup, i like to have a quick way at hand to get to a console in case the network is not working. i once used a serial cable and my notebook, then a small monitor/keyboard; now i use pikvm and can look at my servers’ physical console from my mobile phone (needing an ssl client certificate and TOTP to do so), but this involves network, i know XD

    you likely want S.M.A.R.T. disk monitoring, and to run memtest once in a while.

    for servers i also like to have some monitoring that could push a message to my phone somehow for some foreseeable conditions that i would like to handle manually.

    debsums, logcheck, logwatch and fail2ban are also worth looking at, depending on what you want.

    also, after updating packages, have a look at lsof | egrep “DEL|deleted” to see which programs need a simple restart to really use the libraries that were updated. that way, reboots are only needed for newer kernels.
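    the same check, scripted (relies on lsof being installed; its output format can vary slightly between versions):

    ```python
    import subprocess

    # find processes still mapping libraries that were deleted/replaced on disk
    out = subprocess.run(["lsof", "+c", "0"], capture_output=True, text=True)

    stale = set()
    for line in out.stdout.splitlines():
        if "DEL" in line or "(deleted)" in line:
            cols = line.split()
            if len(cols) > 1:
                stale.add((cols[0], cols[1]))  # (command, pid)

    for cmd, pid in sorted(stale):
        print(f"{cmd} (pid {pid}) still uses an old library -- restart it")
    ```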

    ok this is more than 2 cents, maybe 5. never mind

    hope these ideas help a bit



  • smb@lemmy.ml to Linux@lemmy.ml · “When do I actually need a firewall?” · 5 months ago

    so here are some reasons for having a firewall on a computer that i did not read in the thread (could have missed them). i had already written this once, but then lost the text before it was saved :( so here is a compact version:

    • having a second layer of defence, to prevent some of the direct impact of e.g. supply-chain attacks like “upgrading” to a maliciously manipulated version.
    • controlling things tightly and reporting strange behaviour, as an early warning sign ‘if’ something happens, no matter whether attacks or bugs.
    • learn how to tighten security and know better what to do in case you need it some day.
    • sleeping more comfortably, knowing what you have done or prevented
    • compliance with some laws, or customers’ buzzword-matching wishes
    • the fun of doing it because you can
    • getting in touch with real-life side quests that you would never become aware of if you had not actively practiced hardening your system.

    one side-quest example i stumbled upon: imagine an attacker has compromised the vendor of a software you use on your machine. this software connects to some port eventually, but pings the target first before doing so (whatever! you say). from time to time the ping does not go to the correct 11.22.33.44 of the service (a weather app maybe) but to 0.11.22.33. looks like a bug, you say, never mind.

    could be something different. pinging an IP that does not exist ensures that the connection tracking of your router keeps the entry until it expires, opening a time window that is much easier to hit even if clocks are a bit out of sync.

    also, the attacker knows the IP that gets pinged (but it’s an outbound connection to an unreachable IP, you say, what could go wrong?)

    let’s assume the attacker knows the external IP of your router by other means (e.g. you’ve sent an email to the attacker and your freemail provider hands your external router address to him inside a Received header, or the manipulated software updates a dyndns address, or the attacker just guesses that your router has an address from your provider’s dial-up range; no matter how.)

    so the attacker knows when, and from where (or from what range), you will ping an unreachable IP address in exactly what timeframe (the software runs from cron, or in user space, and pings the “buggy” IP address at exact times). within that timeframe, the attacker sends an icmp unreachable packet to your router’s external address and puts the known buggy IP in the payload as the address that is unreachable. the router matches the payload of the packet, recognizes that it is related to the known connection-tracking entry, and forwards the icmp unreachable to your workstation, which in turn tells the application that the IP address of the attacker informs you that the buggy IP 0.11.22.33 cannot be reached by him.

    as the source IP of that packet is the IP of the attacker, the software can then open a TCP connection to that IP on port 443 and follow the instructions the attacker sends to it. sure, the attacker needs that backdoor to already exist and run on your workstation, and to know or guess your external IP address, but the actual behaviour of the software looks normal - a bit buggy maybe, but there is exactly no information within the software about where the command-and-control server would be, only that it would respond to the icmp unreachable packet it eventually receives. all connections are outgoing, yet the attacker “connects” to his backdoor on your workstation through your NAT “firewall” as if it did not exist, while hiding the backdoor behind an occasional ping to an address that does not respond - either because the IP does not exist, or because it cannot respond due to a DDoS attack on the 100% sane IP that actually belongs to the service the app legitimately connects to, or due to a maintenance window that the provider of the manipulated software officially announces. the attacker just needs the IP to not respond, or to respond slooowly, to increase the timeframe for connecting to his backdoor on your workstation before your router deletes the connection-tracking entry of that unlucky ping.
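    for the curious, the forged packet from the example would look roughly like this (a scapy sketch for lab experiments only; all addresses are made up from documentation ranges, and the embedded icmp id would have to match the backdoor’s real ping for conntrack to accept it):

    ```python
    from scapy.all import IP, ICMP, send  # needs root and scapy installed

    ROUTER = "203.0.113.7"    # the victim router's external address (made up)
    BUGGY  = "0.11.22.33"     # the unreachable IP the backdoor pings
    C2     = "198.51.100.99"  # the attacker's command-and-control box (made up)

    # icmp "host unreachable" carrying a copy of the original echo request;
    # a NAT router matches the inner packet against its conntrack entry and
    # forwards the error inward, revealing C2 as the "it's unreachable" sender
    forged = (
        IP(src=C2, dst=ROUTER)
        / ICMP(type=3, code=1)         # destination/host unreachable
        / IP(src=ROUTER, dst=BUGGY)    # inner: the tracked outbound ping
        / ICMP(type=8, id=0x1234)      # echo request; id must match the real one
    )
    send(forged)
    ```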

    if you don’t understand how that example works, that is absolutely normal, and i might be bad at explaining too. thinking out of the box, around corners that are only sometimes corners to think around, and only under very specific circumstances - which could happen by chance, or could be directly or indirectly under the control of the attacker while revealing the attacker’s location only in the exact moment of connection - is not an easy task, and it can really destroy the feeling of achievable security (aka the belief of having some “control”). but this is not a common attack vector, only maybe an advanced one.

    sometimes side quests can be more “informative” than the main course ;-) so i would put that (“learn more”, not the example above) as the main good reason to install a firewall and other security measures on your pc, even if you’d think you’re okay without one.


  • > This is most likely a result of my original post being too vague – which is, of course, entirely my fault.

    Never mind; i got distracted and carried away a bit from your question by the course the messages had taken.

    > What is your example in response to?

    i thought it could possibly help clarify something; sort of it did, i guess.

    > Are you referring to an application layer firewall like, for example, OpenSnitch?

    no, i do not consider a proxy like squid to be an “application level firewall” (i don’t know opensnitch, however). i would just limit outbound connections to some fqdns per authenticated client, and ensure the connection only goes to where those fqdns actually point. an attacker could, for example, create a weather applet that “needs” https access to f.oreca.st but implements a backdoor that silently connects to a static ip using https. with such a proxy, f.oreca.st would be available to the applet, but the other ip would not, as it is not included in the acl, neither as an fqdn nor as an ip. if you like to call this an application layer firewall, ok, but i don’t think so; to me it’s just a proxy with acls that only checks the allowed destination and whether the response has some http headers (like 200 ok), but not really more. yet it can make it harder for some attackers to gain the control they are after ;-)
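    the core of that idea fits in a small sketch: a toy http CONNECT proxy with a destination allowlist (the hostnames are examples; a real squid setup adds client authentication, logging, header checks etc.):

    ```python
    import socket
    import threading

    # only these destination fqdns may be CONNECTed to
    ACL = {"f.oreca.st", "updates.example.org"}  # example/made-up names

    def pump(src: socket.socket, dst: socket.socket) -> None:
        """relay bytes one way until either side closes."""
        try:
            while data := src.recv(65536):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            src.close(); dst.close()

    def handle(client: socket.socket) -> None:
        req = client.recv(4096).decode(errors="replace")
        try:  # first line: "CONNECT host:port HTTP/1.1"
            method, target, _ = req.split("\r\n")[0].split()
            host, port = target.rsplit(":", 1)
        except ValueError:
            client.close(); return
        if method != "CONNECT" or host not in ACL:
            client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n")
            client.close(); return  # hardcoded-IP backdoors end up here
        upstream = socket.create_connection((host, int(port)), timeout=10)
        client.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n")
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    srv = socket.create_server(("127.0.0.1", 3128))
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
    ```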



  • > But the point that I was trying to make was that that would then also block you from using SSH. If you want to connect to any external service, you need to open a port for it, and if there’s an open port, then there’s an opening for unintended escape.

    now i have the feeling there might be a misunderstanding of what “ports” are and what an “open” port actually is. or i just don’t get what you want. i am not on your server/workstation, thus i cannot even try to connect TO an external service “from” your machine. i can do so from MY machine to other machines as i like, and if those allow me; but you cannot do anything against that, unless that other machine happens to be actually yours (or you own a router that happens to be on my path to wherever i connect to).

    let’s try something: your machine A has an ssh service running, my machine B has ssh, and another machine C has ssh too.

    users on the machines are a, b and c - the machine letters, but in lowercase. what should be possible and what not? like: “a can connect to B using ssh”, “a cannot connect to C using ssh (forbidden by A)”, “a cannot connect to C using ssh (forbidden by C)” […]

    so what is your scenario? what do you want to prevent?

    > I don’t fully understand what this is trying to accomplish.

    accomplish control (allow/block/report) over who or what on my machine can connect to the outside world (using http/s) and to exactly where - independent of ip addresses, using domains to allow or deny per user/application + domain combination, while not having to update ip-based rules that could quickly become outdated anyway.