• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: November 20th, 2024



  • In fact, I think you’d be better off writing a deep dive into what/how environment variables work at build time, and also invoking commands on the CLI.

    But LD_PRELOAD doesn’t really have much to do with build-time behavior (unless you’re talking about replacing parts of the compiler) - it forces a shared library to be loaded ahead of everything else, so its symbols override the same symbols from other libraries.

    It is recognized and used by Linux’s dynamic linker (ld.so), which operates at run time, not at build time.
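
    A minimal sketch of how that looks in practice (file and symbol names are made up for the example): you build a shared library that defines a symbol the target program already uses, and the dynamic linker resolves the program’s calls to your copy instead of libc’s.

    ```c
    /* fake_time.c -- build with:
     *   gcc -shared -fPIC -o fake_time.so fake_time.c
     * then run any dynamically linked program as:
     *   LD_PRELOAD=$PWD/fake_time.so ./some_program
     */
    #include <time.h>

    /* Because fake_time.so is loaded before libc, calls to time() in the
     * target program (and in the libraries it uses) resolve to this symbol. */
    time_t time(time_t *tloc)
    {
        time_t fake = 0;        /* pretend it's the Unix epoch */
        if (tloc)
            *tloc = fake;
        return fake;
    }
    ```

    Note that the target program never gets rebuilt or even touched - which is exactly why this is a run-time mechanism, not a build-time one.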


  • Yes, the DE-specific implementations are pointless (as far as I know - I use a WM), but the XDG implementation is actually used first, and the function returns true if any impl returns true, like xdg() || gnome() || gnome_old() || kde().

    True, I must’ve read the code wrong when making the comment.

    This isn’t that bad?

    Yes, which is why I take issue with a PR (or rather what should have been a PR) that introduces crap code with obvious low-effort improvements left on the table - the submitter should’ve already made those so the project doesn’t unnecessarily gain technical debt by accepting the change.

    With multiple impls, you have to resolve conflicts somehow.

    Yep, that’s why I think it’s important for the implementations to actually differentiate between a light result and a failure state - that’s the smallest change and it lets you keep the whole detection logic in the individual implementations. Combine that with XDG being the default/first one and you get something reasonable (in a world where the separate implementations are necessary). You do mention this, but I feel like the whole two paragraphs are just expanding on this idea.
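
    A rough sketch of what I mean, with made-up names rather than GZDoom’s actual functions: each probe reports “don’t know” explicitly instead of reusing false, and the chain stops at the first probe that actually has an answer.

    ```c
    #include <stddef.h>
    #include <stdio.h>

    /* Three possible answers instead of a bool that conflates "light" and "no idea". */
    typedef enum { THEME_UNKNOWN, THEME_LIGHT, THEME_DARK } ThemePref;

    /* Stub probes for the sketch; the real ones would ask the portal / DE and
     * return THEME_UNKNOWN whenever they can't tell (no config, no DBus, ...). */
    static ThemePref Theme_XDG(void)   { return THEME_UNKNOWN; }
    static ThemePref Theme_Gnome(void) { return THEME_UNKNOWN; }
    static ThemePref Theme_KDE(void)   { return THEME_UNKNOWN; }

    static int SystemPrefersDark(void)
    {
        /* XDG first; DE-specific probes only matter when the previous one
         * genuinely failed, so a stale KDE config can't override GNOME. */
        ThemePref (*probes[])(void) = { Theme_XDG, Theme_Gnome, Theme_KDE };

        for (size_t i = 0; i < sizeof probes / sizeof probes[0]; i++) {
            ThemePref p = probes[i]();
            if (p != THEME_UNKNOWN)
                return p == THEME_DARK;
        }
        return 0; /* nobody could tell -> default to light */
    }

    int main(void)
    {
        printf("prefers dark: %d\n", SystemPrefersDark());
        return 0;
    }
    ```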

    But it’s better to criticize the code’s actual faults (…)

    I made a mistake with the order in which the implementations are called, but I consider the rest of the comment to still stand and the criticisms to be valid.



  • Well, the detection is broken for KDE and backwards in the XDG implementation (which is also only used as a fallback when the three DE-specific implementations fail, even though all of them actually support XDG so having separate implementations is pointless).

    Also, the way it’s implemented, it will give unexpected results for users who have both KDE and GNOME installed (or at least have leftover configuration files) - if you, for example, used KDE in the past with a theme this code considers “dark” and now use GNOME set to light mode, you will get a dark-mode GZDoom with no obvious reason why.

    Oh and the XDG implementation is also very fragile and will not work on everyone’s system because it depends on a specific terminal utility being installed. The proper way would be to use a DBus library and get the settings through that.
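
    For reference, a minimal sketch of asking the settings portal directly over DBus (using sd-bus here; this assumes xdg-desktop-portal is running and new enough to provide ReadOne - on older versions you’d call Read instead and unwrap one more variant):

    ```c
    /* portal_theme.c -- build with:
     *   gcc portal_theme.c $(pkg-config --cflags --libs libsystemd)
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <systemd/sd-bus.h>

    int main(void)
    {
        sd_bus *bus = NULL;
        sd_bus_message *reply = NULL;
        sd_bus_error error = SD_BUS_ERROR_NULL;
        uint32_t scheme = 0;

        int r = sd_bus_open_user(&bus);
        if (r < 0) {
            fprintf(stderr, "can't connect to the session bus: %s\n", strerror(-r));
            return 1;
        }

        r = sd_bus_call_method(bus,
                               "org.freedesktop.portal.Desktop",
                               "/org/freedesktop/portal/desktop",
                               "org.freedesktop.portal.Settings",
                               "ReadOne",              /* Settings portal version >= 2 */
                               &error, &reply,
                               "ss", "org.freedesktop.appearance", "color-scheme");
        if (r < 0) {
            fprintf(stderr, "portal call failed: %s\n",
                    error.message ? error.message : strerror(-r));
        } else {
            /* The reply is a variant holding a uint32:
             * 0 = no preference, 1 = prefer dark, 2 = prefer light. */
            sd_bus_message_read(reply, "v", "u", &scheme);
            printf("color-scheme = %u\n", scheme);
        }

        sd_bus_error_free(&error);
        sd_bus_message_unref(reply);
        sd_bus_unref(bus);
        return r < 0;
    }
    ```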

    And when somebody comes to fix it, they will have to figure out a) what’s so special about the DE-specific implementations that XDG wasn’t enough (they might just assume that XDG isn’t supported widely enough), b) how to detect dark theme properly on the DE they’re fixing, and c) how to rework the code so that there is a difference between “this DE wants light mode” and “couldn’t figure out whether this DE is in light or dark mode” - both of which are currently represented by the same “false” return value.

    I don’t think well-written, functioning code made with AI assistance would get a response this strong, but the problem here is that the code is objectively bad and its (co-)author kept doubling down about something they probably barely even checked.


  • Don’t know about the UK, but in central Europe it’s common for houses to get three-phase power, which can then be used on 400 V three-phase circuits and gets split (ideally evenly) into 230 V single-phase circuits. And since the phases have effectively zero coupling, you also need to just try the adapter to find out whether it’s going to work, unless you happen to know exactly how your house is wired up - just like with split-phase power.
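
    (For the numbers: 400 V is just the line-to-line voltage you get between any two of the three 230 V phases, since they’re 120° apart.)

    \[ V_{\text{line-to-line}} = \sqrt{3} \cdot V_{\text{phase}} \approx 1.73 \times 230\ \mathrm{V} \approx 400\ \mathrm{V} \]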

    Apartments usually get a single phase though, but IMHO it’s also less likely that WiFi won’t be enough there, so it’s questionable whether that’s even a point in powerline’s favor.


  • Honestly, this is not really technobabble. If you imagine a user with a poor grasp of namespaces following a few different poorly written guides, then this question seems plausible and makes sense.

    The situation would be something like this: the user wants to look at the container’s “root” filesystem (maybe they even want to change files in the container by mounting the image and navigating there with a file manager, not realizing that this won’t work). So they follow a guide to mount a container image into the current namespace, and successfully mount the image.

    For the file explorer, they use pcmanfm, and for some reason decided to install it through Flatpak - maybe they use an immutable distro (containers on Steam Deck?). They gave it full filesystem access (with user privileges, of course), because that makes sense for a file explorer. But they started it before mounting the container image, so it won’t see new mounts created after it was started.

    So now they have the container image mounted, have successfully navigated to the directory into which they mounted it, and pcmanfm shows an empty folder. Add a slight confusion about the purpose of xdg-open (it does sound like something that opens files, right?), and you get the question you made up.


  • Maybe a good option for projects that you don’t want anyone else to contribute to, but then why make them open source in the first place?

    Because, at least to some people, open source is more about user freedom (to modify the software and share the modifications with anyone they wish) and less about collaboration.

    For example, every time I publish some simple utility that I wrote for myself and decided could be useful to other people, I release it under a reasonable open source license and pretty much forget about it - I’m not going to be accepting merge requests; I don’t have time to maintain random tiny projects. If I ever need the utility to do something it doesn’t quite do, I’ll check whether any of the forks seem to have implemented it. If not, I’ll just implement it in my repo.

    The reason I’m publishing the code is that I know how much it sucks when you find some proprietary freeware utility that almost does what you need, but you can’t fix it for your use case on account of it being proprietary for no reason (well, the author’s choice is the reason, and I respect it, but it’s still annoying).


  • Virtual memory isn’t swap; it’s a mechanism that allows the operating system to give processes a view of memory that is almost completely decoupled from real physical memory and from other processes. For example, some programs require their code and data to be placed at exact memory locations in order to work - virtual memory allows you to run as many of these programs as you wish, because one process’s address 0x1000 has nothing to do with another one’s 0x1000, unless they set it up as shared memory (but even the same chunk of shared memory might be mapped to different addresses in the processes that share it).
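
    A small illustration of that last point (an assumed standalone demo, nothing from the thread): one physical page, mapped twice in the same process at two different virtual addresses, where a write through one mapping is visible through the other.

    ```c
    /* two_mappings.c -- requires Linux with memfd_create (glibc >= 2.27). */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* An anonymous in-memory file gives us a page we can map more than once. */
        int fd = memfd_create("demo", 0);
        if (fd < 0 || ftruncate(fd, 4096) < 0) { perror("memfd"); return 1; }

        char *a = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (a == MAP_FAILED || b == MAP_FAILED) { perror("mmap"); return 1; }

        /* Same physical page, two different virtual addresses. */
        strcpy(a, "written through mapping a");
        printf("a = %p\nb = %p\nb reads: \"%s\"\n", (void *)a, (void *)b, b);
        return 0;
    }
    ```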

    Swapping is a cool trick that you can do with virtual memory, though. Basically you copy a chunk of memory out to disk and then mark its address invalid in virtual memory. When the process tries to access it, it “crashes” - the CPU raises a page fault. The OS is notified, sees that the fault happened because the process touched swapped-out memory, loads the chunk back from disk (maybe to a different physical location), updates the virtual memory mapping to point at this chunk, and resumes the process at the instruction that caused the fault. So from the point of view of the process, nothing went wrong at all, except that one instruction took a very long time to execute.
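
    You can poke at the same trap-and-resume trick from userspace (a loose analogy, not how the kernel literally implements swap): protect a page, let an access fault, repair the mapping in the signal handler, and the faulting instruction simply runs again as if nothing happened.

    ```c
    /* fault_and_resume.c -- Linux demo; mprotect() in a signal handler is not
     * formally async-signal-safe, but it's fine for illustration. */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define PAGE 4096   /* assume 4 KiB pages for the demo */

    static char *page;

    static void on_segv(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        /* A fault outside our page is a real crash, not our simulated swap-in. */
        if ((char *)info->si_addr < page || (char *)info->si_addr >= page + PAGE)
            _exit(1);
        /* "Swap the page back in": make it accessible again. The faulting
         * instruction is then re-executed and succeeds. */
        mprotect(page, PAGE, PROT_READ | PROT_WRITE);
    }

    int main(void)
    {
        struct sigaction sa = { .sa_sigaction = on_segv, .sa_flags = SA_SIGINFO };
        sigaction(SIGSEGV, &sa, NULL);

        page = mmap(NULL, PAGE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }
        strcpy(page, "hello");

        /* Pretend the page was swapped out: any access now faults. */
        mprotect(page, PAGE, PROT_NONE);

        /* This read faults, the handler restores the page, and execution
         * resumes here transparently. */
        printf("still readable: %s\n", page);
        return 0;
    }
    ```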

    Also, isn’t it harmful to SSDs?

    Swapping doesn’t do enough writes to matter, unless your system is running really low on RAM.