• 0 Posts
  • 84 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • Ouch, yeah, that Windows endpoint stuff is really grating. I get that you can’t just whitelist some folder without compromising security, but when the “eNdPoInt pRoTeCtIoN” just deletes the DLLs and EXEs you are compiling (and makes your PC crawl), you really hate that stuff.

    Right click? That’ll be 40 seconds, plz (any of the possible context-menu entries might point at a virus, so let’s just check them all once again).

    At home I have an old Linux PC, and it blows those corpo super PCs out of the water.

    Rant off :-D

    Ah yeah, IT people are chill; staying on good terms with them is always a good idea too. It’s not their fault all this crap exists.

  • I hate Jira because it makes management slot your work up stupidly, or at least that’s how it feels to me.

    A manager usually works in time slots, say 8 a day (or whatever), and they’re mostly disconnected from each other: a meeting with A, team B’s standup, a PMD with dev C, etc. Dev work isn’t like that, but everyone seems to start thinking it is: how many “items” were finalised last “sprint” and other stupid metrics.

    Am I alone here, or are there even worse things about Jira, in your opinion?


  • Thanks!

    IPFS is static, whereas tenfingers is dynamic when it comes to the links, so you can update the shared data without needing to redistribute the link.

    That said, it’s also very different tech-wise: there is no need for benevolent nodes (or any crypto or payments).

    Nodes do not need to be trustworthy either, so node discovery is very simple (basically just ask other nodes for known nodes).
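
    For illustration, a rough sketch of that kind of gossip-style discovery, in Python. This is not the actual tenfingers code; `ask_node` is a made-up stand-in for the real peer-exchange call:

    ```python
    import random

    def discover_nodes(known, ask_node, want=100, max_rounds=20):
        """Gossip discovery: grow our peer list by asking peers which
        peers *they* know. No trust needed; dead nodes just get dropped."""
        for _ in range(max_rounds):
            if len(known) >= want or not known:
                break
            peer = random.choice(sorted(known))
            try:
                known |= set(ask_node(peer))  # hypothetical RPC: "who do you know?"
            except OSError:
                known.discard(peer)           # unreachable: forget it, no harm done
        return known
    ```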

    The distribution part, where nodes share your data, is based on reciprocal sharing: you share theirs and they share yours. If they stop sharing (there are checks), you just ditch the deal and ask another node for a new one.
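
    Roughly, the upkeep of those deals could look like this (a hypothetical sketch; `still_serving` and `make_deal` are invented names for the check and the handshake):

    ```python
    def refresh_deals(deals, candidates, make_deal, still_serving):
        """Reciprocity upkeep: spot-check each partner, ditch the ones that
        stopped serving our data, and strike a replacement deal elsewhere."""
        for deal in list(deals):
            if still_serving(deal):            # hypothetical spot check
                continue
            deals.remove(deal)                 # partner went bad: ditch the deal
            for node in candidates:
                replacement = make_deal(node)  # we store theirs, they store ours
                if replacement:
                    deals.append(replacement)
                    break
        return deals
    ```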

    With oversharing (by default your data is shared with 10 other nodes, whose data you share in return), bad nodes should be a non-issue, and you also get good uptime and takedown resistance.
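
    Back-of-envelope, the redundancy pays off fast: with 10 independent copies, your data is unreachable only if all 10 nodes are down at once, i.e. with probability (1 − p)^10 for per-node uptime p. A quick check (my numbers, not the project’s):

    ```python
    # P(unreachable) = (1 - p) ** 10 with 10 independent replicas
    for p in (0.5, 0.7, 0.9):
        print(f"node uptime {p:.0%} -> all 10 copies down: {(1 - p) ** 10:.6%}")
    # node uptime 50% -> all 10 copies down: 0.097656%
    # node uptime 70% -> all 10 copies down: 0.000590%
    # node uptime 90% -> all 10 copies down: 0.000000%
    ```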

    This also makes the system scale indefinitely in terms of nodes, since no node needs to know all the others, just enough for its own needs (for example, thousands out of millions of existing nodes).

    To share lots of data, you need to bring enough storage and bandwidth to the table, because it’s reciprocal; so basically it’s up to your node how much it can share.

    Big data sets are always complicated because of errors and long download times. I have done 300 MB files without problems, but the download process can certainly be made better (with parallel downloading, for example, and better error handling).

    I haven’t worked on sharing much bigger datasets; even a single terabyte is a pita to download on the regular internet :-) and the use case is more about sharing lots of smaller data, like a website or a chat, for example.

    What do you think, am I missing something important? And of course, if you have other questions, please do ask!

    Also, sorry I’m writing this on my mobile so it’s not very well written.

    Edit: missed one question. Getting the data is straightforward (how it’s handled is a bit complicated under the hood because of the changing nature of things), but when you download, you have the addresses of the nodes sharing your data, so you just connect to one of them and download (or the next one if the first isn’t up, and so on). So that should not be any kind of bottleneck.
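
    For what it’s worth, that failover is simple enough to sketch too (`fetch_from` is again a made-up stand-in for the actual transfer):

    ```python
    def download(holders, fetch_from):
        """Walk the nodes known to hold a copy; return the first successful
        fetch. fetch_from(addr) returns the data or raises OSError when down."""
        for addr in holders:
            try:
                return fetch_from(addr)   # first reachable node wins
            except OSError:
                continue                  # down or flaky: try the next one
        raise RuntimeError("no node holding this data is reachable right now")
    ```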