• 44 Posts
  • 231 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • Yeah, but it lacks the tree structure that tends to support deeper specialization. I still visit the EEVblog forum from time to time, but that kind of concentration of specialization is just not the default here.

    To replicate that kind of ecosystem, I think the platform would need a similarly complex branching hierarchy and a far more effective search utility. Recency is prioritized too heavily on a link aggregator like Lemmy. Community depth of specialization stays shallow because deeper intellectual engagement is slower, and mechanics for surfacing the most recently commented threads are either ineffective or unimplemented. Places like the EEVblog forum often have the most engagement on very old threads that concentrate a ton of history and useful information in a single place; these threads are the primary anchor for the whole community. I think it would take some novel innovation to bridge a link aggregator’s ADHD with a forum’s depth and utility.


  • I’ve had this happen with AI stuff that runs in a Python venv. It only happens with apps that use multithreading, and usually when something is interrupted in an unintended or unaccounted-for way. I usually see it when I start screwing with code, but also after changing the softmax settings during generation or crashing other stuff while hacking around. There may be a bug of some kind, but I think it more likely has to do with killing the parent process and leaving behind an orphaned child that never gets cleaned up the standard way. When this happens, I restart too.
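
    You can reproduce the orphaning behavior on any Linux box, no Python required; a minimal sketch:

    ```bash
    # A subshell spawns a child and exits immediately; the orphaned child is
    # reparented to PID 1 (or the user's systemd instance acting as subreaper)
    # and keeps running -- the same way a killed generation run can leave a
    # worker process holding the GPU.
    ( sleep 300 & )
    sleep 1
    ps -o pid,ppid,stat,cmd -C sleep   # PPID is no longer the shell that spawned it
    ```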




  • Wow:

    Oleksiy Protas

    P.S. “Don’t feed the trolls”

    Don’t you worry. Our friend here tried to reply to this message, twice in fact with slightly different wording, but it was full of political rage and tu quoque, so I assume he fell victim to the spam filter thanks to your special counter-baiting operation, so to speak.

    That aside, I did a very superficial search, and it seems the original author had already had a pull request rejected on the grounds that it was coming straight from his Baikal credentials. It’s a real pity that an apparently very able engineer is just playing pretend despite knowing full well why the LF might not want to be associated with Baikal in any way.

    Serge Semin

    Hello Linux-kernel community,

    I am sure you have already heard the news caused by Greg’s recent commit 6e90b675cf942e (“MAINTAINERS: Remove some entries due to various compliance requirements.”). As you may have noticed, the change concerned the removal of some Ru-related developers from the list of official kernel maintainers, including me.

    The community members rightly noted that the quite short commit log contained very vague terms with no explicit justification for the change. No matter how hard I tried to get more details about the reason, the senior maintainer I discussed the matter with never explained what those compliance requirements were. I won’t cite the exact text since it was private correspondence, but the key words were “sanctions”, “sorry”, “nothing I can do”, “talk to your (company) lawyer”… I can’t speak for all the people affected by the change, but my work for the community has been purely volunteer for more than a year now (and less than half of it was paid before that). For that reason I have no (company) lawyer to talk to, and honestly, after the way the patch was merged in, I don’t really want to now. Silently, behind everyone’s back, bypassing the standard patch-review process, with none of the affected developers or subsystems notified: it’s indeed the worst way to do what has been done. No gratitude, no credit to the developers for all these years of devoted work for the community. Whatever the reason for the situation, haven’t we deserved more than that? An entry in the CREDITS file at least, no?..

    I can’t believe the senior kernel maintainers didn’t consider that the patch would not go unnoticed, and that the situation might get out of control with unpredictable results for the community, if not straight away then in the medium or long term. I am sure there were plenty of ways to solve the problem less harmfully, but they decided to take the easiest path. Alas, what’s done is done. A bifurcation point quietly initiated a year ago has just been fully realized. The reason for the situation is obviously political, which in this case surely shakes the foundation the community was built on in the first place. If so, then God knows what might come next (who else might be sanctioned…), but the move clearly sends a bad signal to Linux community newcomers, and to the already-contributing volunteers and hobbyists like me.

    Thus even if it were still possible for me to send patches or perform reviews, after what has been done my motivation to do so as a volunteer has simply vanished. (I might do some commercial upstreaming in the future, though.) But before saying goodbye I’d like to express my gratitude to all the community members I have been lucky to work with over the years. Specifically:

    NTB folks, Jon, Dave, Allen. NTB was my starting point in kernel upstream work. Thanks for the initial advice; despite the very, very tough reviews with several complete patchset refactorings, I learned a lot back then, and that experience helped me afterwards. Thanks a lot for that. BTW, since then I’ve received several thank-you letters for the IDT NTB and IDT EEPROM drivers. If not for you, they wouldn’t have been possible.

    Andy, it’s hard to think of anyone who has given me more on my Linux kernel journey than you have. We first met in the I2C subsystem review of my DW I2C driver patches. Afterwards we kept meeting here and there: GPIO, SPI, TTY, DMA, NET, etc., in cleanup/fix/feature patch(set)s. The quite heated discussions in your first reviews drove me crazy, really, but we always managed to reach some consensus somehow. You never quit a discussion, calmly explaining your point over and over, and you never refused to give a more detailed justification for your requests and comments even though you didn’t have to. Thanks to that I learned how to be patient with reviewers and reviewees. And of course, thank you for the Linux kernel knowledge and all the tips and tricks you shared.

    Linus (Walleij), after you merged one of my rather heavy patchsets, you suggested that I continue maintaining the DW APB GPIO driver. It was the first time I was asked to maintain a driver that wasn’t mine. Thank you for the trust. I’ll never forget that.

    Mark, thank you very much for entrusting the DW APB SSI driver maintenance to me. I’ve put a lot of effort into making it more generic and less error-prone, especially when working under DMA-engine control or in mem-ops mode. I am sure the results have benefited a lot of DW SPI-controller users since then.

    Damien, our first and last meeting was at the review of my generic AHCI-platform and DW AHCI SATA driver patches. You didn’t make it a quick and easy path, but all the review comments were purely technical, and the patches were eventually merged in. Thank you for your time and the experience I gained from the reviews.

    Paul, Thomas, Arnd, Jiaxun, we met several times on the mailing list during the review of my MIPS P5600 patches and generic MIPS patches. It was always a pleasure to discuss the matters with such brilliant experts in the field. Alas, I spent too much time working on patches for other subsystems and failed to submit all the MIPS-related bits. Sorry I didn’t keep my promise, but as you can see, the circumstances have suddenly drawn their own deadline.

    Bjorn, Mani, we worked together quite a lot on the DW PCIe RC drivers. You reviewed my patches, and I helped you review other patches for some time. Despite some arguing, it was always a pleasure to work with you. Mani, special thanks for the cooperative DW eDMA driver maintenance. I think we did great work together.

    Paolo, Jakub, David, Andrew, Vladimir, Russell. The network subsystem, and particularly the STMMAC driver (no doubt the driver sucks), turned out to be the obstacle on which my current Linux kernel activity stopped. I really hope that at least in some way my help with the incoming STMMAC and DW XPCS patch reviews lightened your maintenance duties. I know Russell might disagree, but I honestly think all our discussions were useful after all, at least for me. I also think Russell and I did great work together on the DW GMAC/QoS ETH PCS patches. Hopefully you’ll find the time to finish it up after all.

    Rob, Krzysztof, from your reviews I learned a lot about the most hardware-centric part of the kernel: DT sources and DT bindings. All your comments were laconic and straight to the point, which made the reviews quick and easy. Thank you very much for that.

    Guenter, special thanks for reviewing and accepting my patches to the hwmon and watchdog subsystems. It was a pleasure working with you.

    Borislav, we disagreed and argued a lot, so my DW uMCTL2 DDRC EDAC patches even got stuck in limbo for quite a long time. Anyway, thank you for the time you spent reviewing my patches and trying to explain your point.

    • Borislav, it looks like I won’t be able to work on my Synopsys EDAC patchsets anymore. If you or somebody else could pick them up and finish the work, it would be great (you can find them in the lore archive). The patches convert the mainly Zynq(MP)-specific Synopsys EDAC driver to support the generic DW uMCTL2 DDRC, which would be very beneficial for every platform based on that controller.

    Greg, we met several times on the mailing lists. You reviewed my patches sent to the USB and TTY subsystems, and every time the process was straightforward, highly professional, and simpler than in most of my other cases. Thank you very much for that.

    Yoshihiro, Keguang, Yanteng, Kory, Cai, and everybody I was lucky to meet on the kernel mailing lists but forgot to mention here. Thank you for the time spent on our cooperative work making the Linux kernel better. It was a pleasure to meet you.

    I also wish to say huge thanks to the community members who tried to defend the removed maintainers, and for the support you have expressed these past days. It means a lot.

    A few statistics on my kernel work, to finish:

    Signed-off patches: 518
    Reviewed and Acked patches: 253
    Tested patches: 80

    Best Regards, -Serge(y)



  • j4k3@lemmy.world to Linux@lemmy.ml · So all of my drivers are breaking · 24 days ago

    That is what I meant by configure. You’re not going to HP to download your printer driver or to Realtek to get one for your network adapter. To the end user, the kernel either includes the required modules or it’s a matter of simple configuration. The exception is proprietary garbage. Even with Nvidia on Fedora it’s a non-issue: the akmods system rebuilds the Nvidia module from source with every kernel update, outside the kernel tree but under the shim, so even Secure Boot works.
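
    If you want to verify that chain on a Fedora box, a minimal sketch (package names assume the usual RPM Fusion akmod-nvidia setup):

    ```bash
    mokutil --sb-state                              # confirm Secure Boot is enabled
    rpm -q akmod-nvidia "kmod-nvidia-$(uname -r)"   # source package + module built for this kernel
    sudo akmods --force                             # manually trigger a rebuild (normally automatic)
    ```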

    The OP was not asking for Computer Science OS 101. My reply was just intended as a surface-level nudge to get them to question the “drivers” mentality. I’ve seen many people follow that logic and get nowhere.


  • j4k3@lemmy.world to Linux@lemmy.ml · So all of my drivers are breaking · 24 days ago

    Indeed, there are gaps in my knowledge. I understand what you wrote in theory, but only vaguely, based on reading a forum thread on kernel architectures several years ago. I’m most familiar with the user experience of configuring a custom Linux kernel on Gentoo, versus needing a WiFi driver that I need WiFi access to source.

    Since you are touching on a gap in my knowledge, perhaps a more recent issue and curiosity will help me ground this a little better, if you don’t mind responding. What is the deal with Secure Boot and Windows drivers? How are they able to run some random driver from the internet that has DMA?



  • j4k3@lemmy.world to Linux@lemmy.ml · So all of my drivers are breaking · 24 days ago

    That sounds like a hardware issue.

    Keep in mind that Linux is a monolithic kernel. It doesn’t technically have “drivers” that can go missing: support for hardware lives in in-tree kernel modules selected at configuration time. The general-purpose kernels shipped by distros are configured to work out of the box for most hardware. The only exceptions should be cases where oddball hardware conflicts with the standard way other hardware works in the same space, or where the hardware is completely undocumented by the chip manufacturer. The latter is the worst kind, as some of those devices get poor or no support.
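
    For example, you can check exactly how your distro’s kernel was configured for any given driver; a sketch using the common r8169 network driver as an arbitrary example:

    ```bash
    grep CONFIG_R8169 /boot/config-"$(uname -r)"   # =y means built in, =m means built as a module
    lsmod | grep -i r8169                          # is the module currently loaded?
    modinfo r8169                                  # metadata for the in-tree module
    ```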

    By contrast, Windows uses a hybrid kernel design: it exposes an API layer for the hardware vendor to write a driver that interfaces with Windows. That leaves the end user stuck in the middle, sourcing and installing the driver and dealing with any potential issues. In other words, Microsoft doesn’t have devs maintaining or doing anything meaningful in this space, and it enables undocumented, proprietary crap hardware.





  • You’ve got to do some manual config. I know about it but don’t use it. You can redirect home folders with flags on distrobox create. I think the better option is to use users/groups/SELinux contexts in addition to the container, since that shows up in file ownership and is easier to trace. One of my main problems is that some packages have Python installation requirements that by default try to break pip out of any containerized context and create their own venv setup. That totally screws up the whole distrobox container setup and its separation from the base system.
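
    A sketch of the home-redirect flag mentioned above (names and paths are made up):

    ```bash
    # Give the container its own home so its pip/venv sprawl stays out of the host's
    distrobox create --name pybox --image fedora:40 --home "$HOME/boxes/pybox"
    distrobox enter pybox
    # inside the box, anything written to ~ now lands under ~/boxes/pybox on the host
    ```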


  • With Linux over the years, I have learned to basically ignore all hardware marketing as scammers. The supporting software is the important part. If the software is not open source, you are only renting the product, and it likely includes, or has the potential to become, an extortion scheme of subscription parasites. When I shop for products now, I search for the open source software first. Once I find a large project with several contributors, I git clone the repo and run an app called gource on the command line. Gource creates a 3D visualization of the project and its commit history over time. Have a look at the Linux kernel some time, or just watch a video someone has uploaded of the visualization: https://www.youtube.com/watch?v=5iFnzr73XXk
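
    If you want to try it yourself, the basic flow looks like this (the flags are just a reasonable starting point, not gospel):

    ```bash
    git clone https://github.com/torvalds/linux.git   # needs full history, so no shallow clone
    cd linux
    gource --seconds-per-day 0.1 --auto-skip-seconds 1 \
           --highlight-user "Linus Torvalds" --hide filenames
    ```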

    In the actual visualization you can zoom in and select individuals, or watch specific branches. The trick is to get an idea of who the main contributors are in the various spaces and how consistent they are. Find who is working on what hardware and how they are working on it. Sometimes you’ll see a person come in and make a single commit, or a few commits that contain everything for a device, and then disappear. These are often subcontracted devs that a company hires and hands a checklist; issues, bugs, and unsupported features are unlikely to get fixed unless you see someone else making commits in that space. What you’re really looking for is one of the main project devs making ongoing commits to some specific hardware over a long period, and fairly recently. It means they have the device in question, which generally means the device has or will have excellent long-term support. It also generally means the person either really liked the product, or the company was smart enough to supply the dev with the device or supporting documentation.

    Sorry if this seems unsolicited. It took me a long time to break out of the hardware-spec shopping fallacy and all the trouble it can cause. Prioritizing true ownership and shopping for the software first is a far more enjoyable experience. It likely won’t help in this niche, but for computers in general, use: https://linux-hardware.org/

    You will likely find that search engines attempt to obfuscate this information; expect that. Use offline open source LLMs, ask the community, or use more advanced search methods to find relevant info. Both m$ and the goo are the two biggest beneficiaries of the proprietary software ecosystem, and they run the only two web crawlers that exist at relevant scale. All search engines use one or both of these sources, either directly or by proxy.


  • TBH: tl;dr (…but read ~1/4 and skimmed the rest.)

    Emacs can likely do most, if not all, of what you’re looking for.

    As far as distros go, pick either Fedora Workstation or Silverblue. If you can run SB, try to avoid messing with the base system as much as possible; skip the toolbox container system and just use distrobox. With distrobox you have almost every Linux distro available as a container to build on. The only exception I know of is Nix: you can’t run NixOS in distrobox. You could probably run the Nix package manager, but that involves a weird setup where a user-owned directory lives in the root filesystem. Personally, that is just too weird for me to use; I expect all user activity and configuration files to be confined to /home/$USER/.
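
    To give a feel for the distrobox workflow (the image and app here are arbitrary examples):

    ```bash
    distrobox create --name arch-box --image docker.io/library/archlinux:latest
    distrobox enter arch-box            # a full Arch userland on top of the Fedora host
    sudo pacman -S firefox              # use the guest distro's package manager
    distrobox-export --app firefox      # surface the guest app in the host's app menu
    ```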

    Fedora just works, but try to lag behind the release cycle a little. Right now F40 is pretty solid, but there were some issues in the first month or so after it came out. I have lagged behind every release since ~F28 and never had issues; this time I switched to F40 within the first week or so, and a few packages were wonky. Basically, Python was super fresh and did some odd stuff with containers, which did not work without manually removing and replacing Python in each one. I think that is the only manual-intervention issue I’ve had with Fedora. I have a 3080 Ti laptop with the 16 GB GPU. The akmods system in Fedora rebuilds the Nvidia kernel module automatically in the background each time the kernel is updated. It works flawlessly, even with Secure Boot enabled.




  • Primarily abuse by predatory boys and men toward girls and young women in the real world, by portraying them in imagery of themselves or with others. The most powerful filtering is in place to make this more difficult.

    Whether intentional or not, most NSFW LoRA training seems to be trying to override the built-in filtering in very specific areas. These are still useful for more direct momentum toward something specific. However, once the filters are removed, the model is far more capable of creating whatever you ask for as-is, from celebrities to anything lewd. I did a bit of testing earlier with some LoRAs and no prompt at all. Interestingly, it could take a celebrity and convert their gender in ways that remained recognizable, which was surprising. I got a few of those on random seeds, but I haven’t been able to make it happen with a prompt or deterministically.

    Edit: I’m probably assuming too much about other people’s knowledge of these systems, and I assume that is the motivation for the downvotes. In this discussion, the NSFW junk is shorthand for the broader issues with AI generation: it is the primary target of filtering, and that has large cascading implications elsewhere. By stating what is possible in this area, I’m implying a worst-case example. If the results in this area behave a certain way, it says volumes about other areas and how the model will react.

    These filter layers are stupidly simplistic compared to the actual model. They have tensors on the order of a few thousand parameters per layer, compared to tens of millions of parameters per layer in the actual model. They shove tons of prompts into gutter-like canned responses for no reason. Sometimes these average out and you still get a good output; other times they do not.

    Another key point is that diffusion has a lot in common with text generation in this part of the model-loader code. There is more complexity in what text generation does overall, but diffusion is an effective way to learn a lot about how text gen works, especially with training. That is my primary reason for playing with diffusion: to learn about training. I’ve tried training for text gen, but it is very difficult to assess what is happening under the surface, like when it is learning overall style, character traits and personas, pacing, creativity, timeline, history, scope, constraints, etc. I don’t care to generate and share much imagery unless I’m trying to do something specific that is interesting. For instance, I tried to generate the interior of an O’Neill cylinder space habitat. It illustrated a fundamental limitation of diffusion: the model lacks the reasoning about object context and relationships required to render a scene with curved, centrifugal spin gravity.

    Anyway, my interest is not in generating NSFW content or celebrities or whatnot; I do not think people should do those things. My primary interest is returning to creative writing with an AI collaborative writing partner that is not politically biased in a way that cripples it from participating in an entirely different and unrelated cultural and political landscape. I have no aspirations of finding success in my writing. I simply enjoy exploring my own science fiction universe and imagining a reality many thousands of years from now. One of the changes to hard-coded model filters earlier this year made the filtering more persistent, likely aimed at NSFW content. I get it, and support it, but it took away one of the few things I have really enjoyed over the last 10 years of social isolation and disability, so I’ve tried to get that back. Sorry if that offends someone, but I don’t understand why it would. This was not my intended reason for this post, so I did not explain it in depth. The negativity here is disturbing to me. This place is my only real way to interact with other humans.