It looks like the !buildapc community isn't super active, so I apologize for posting here. Mods, let me know if I should post there instead.

I built my first PC when I was, I think, 10-11 years old, built my next PC after that, and then sort of moved toward pre-made HP/Dell/etc. My last PC's mobo just gave out and I'm looking to replace the whole thing. I've read over the last few years that prefabs from HP/Dell/etc. have gone to shit and don't really work like they used to. Since I'm looking to expand comfortably, I've been thinking of giving building my own a try again.

I remember from when I was a young lad that there were two big pain points when putting a rig together: motherboard alignment with the case (I shorted two mobos by letting them touch the bare metal of the grounded case; not sure how that happened but it did) and CPU pin alignment so you don't bend any pins when inserting the chip into the socket.

Since it’s been several decades since my last build, what are some things I should be aware of? Things I should avoid?

For example, I only recently learned what M.2 SSDs are. My desktop has (had) SATA 3.5" drives, only one of which is an SSD.

I'll admit I am a bit overwhelmed by the choices; I've spent some time on pcpartpicker and there are a lot of options. Most of my time is spent in code development (primarily containers and Node). I am planning on installing Linux (Ubuntu, most likely) and I am hoping to tinker with some AI models, something I haven't been able to do with my now-broken desktop due to its age. For ML/AI, I know I'll need some sort of GPU, knowing only that NVIDIA cards require closed-source drivers. While I fully support FOSS, I'm not an OSS purist and fully accept that using closed-source drivers on Linux may not be avoidable. Happy to take recommendations on GPUs!

Since I also host a myriad of self-hosted apps on my desktop, I know I'll need to beef up my RAM (I usually go for the max, or at least plan for the max).

My main requirements:

  • Intel i7 processor (I’ve tried i5s and they can’t keep up with what I code; I know i9s are the latest hotness but don’t think the price is worth it; I’ve also tried AMD processors before and had terrible luck. I’m willing to try them again but I’d need a GOOD recommendation)
  • At least 3 SATA ports so that I can carry my drives over
  • At least one M.2 port (I cannibalized a laptop I recycled recently and grabbed the 1TB M.2 card)
  • On-board Ethernet/NIC (on-board wifi/bluetooth not required, but won’t complain if they have them)
  • Support at least 32 GB of RAM
  • GPU that can support some sort of ML/AI with DisplayPort (preferred)

Nice to haves:

  • MoBo with front USB 3 ports but will accept USB 2 (C vs A doesn’t matter)
  • On-board sound (I typically use headphones or bluetooth headset so I don’t need anything fancy. I mostly listen to music when I code and occasionally do video calls.)

I threw together this list: https://pcpartpicker.com/list/n6wVRK

It didn’t matter to me if it was in stock; just wanted a place to start. Advice is very much appreciated!

EDIT: WOW!! I am shocked and humbled by the great advice I’ve gotten here. And you’ve given me a boost in confidence in doing this myself. Thank you all and I’ll keep replying as I can.

  • @tabular@lemmy.world

    The responsiveness difference between a hard drive and an SSD is night and day. NVMe is even faster, but not noticeably so unless you move a hell of a lot of data around. A motherboard having at least 1 M.2 NVMe slot is common, so installing the OS on one is an option. Hard drives have more storage per price, but unless space is a significant factor I suggest using SSDs (also quieter than a spinning disk!). More info on storage formats in this video

    Recent generations of motherboards use DDR5 RAM, which was very expensive on release. I think the price has come down, but I am not up to date on this generation. You may be able to save money building a DDR4 system, but you'll be stuck on a less-supported platform.

    AMD had like ~10 years of bad/power-hungry processors while Intel stagnated, re-releasing 4-core processors over and over. AMD made a big comeback with their Ryzen series, becoming the best bang for the buck, then even overtaking Intel. I think it's pretty even now.

    If you don't intend to game or do certain compute workloads then you can avoid buying a GPU. Integrated GPUs have come quite far (still low-end compared to a dedicated GPU). Crypto mining, Covid and now AI have made the GPU market expensive and boring. Nvidia has more higher-end cards, the mid range is way more expensive for both, and the low end sucks ass. On Linux, AMD GPU drivers come with the OS, but for Nvidia you have to get their proprietary drivers (Linux gaming has come a long way).

    • youmaynotknow

      DDR5 has gone down dramatically compared to launch. You can get 64GB with a very fast bus for under 200 dollars now; at launch, 32GB would easily set you back 250+.

      AMD has made a killing with Ryzen. Never mind the new naming convention that Intel came up with to make it even more complicated to choose the right CPU for your use cases; ridiculous.

      As for Nvidia GPU drivers, at the end of the day they just work, regardless of their proprietary-drivers philosophy (which, again, I agree sucks). But if the OP is going to be doing AI development, machine learning and all that cool stuff, he'd be better served by getting a few CUDA TPUs. You can get those anywhere from 25 dollars to less than 100, and they come in all types (USB, PCI, M.2). https://coral.ai/products/#prototyping-products

      I have 1 USB Coral running the AI on my Frigate Docker setup for 16 cameras, and my CPU never reaches more than 12% while the TPU itself barely touches 20% utilization. You put 2 of those bad boys together, and the CPU would probably not even move from idle 🤣

      • Fubber Nuckin'

        Hold on a second, how come every time I look for TPUs I get a bunch of not-for-sale Nvidia and Google cards, but this just exists out there and I never heard of it?

        • youmaynotknow

          I only found out about those about 6 months ago, and it was by chance while going over the UnRaid forum for Frigate, so I decided to do some research. It took me almost 4 months to finally get my paws on one. They were seriously scarce back then, but have been available for a couple of months now; I finally got mine at the end of November. They seem to be in an availability trend similar to Raspberry Pis.

      • @CeeBee@lemmy.world

        getting a few CUDA TPUs

        https://coral.ai/products/#prototyping-products

        Those aren’t “CUDA” anything. CUDA is a parallel processing framework by Nvidia and for Nvidia’s cards.

        Also, those devices are only good for inferencing smaller models for things like object detection. They aren’t good for developing AI models (in the sense of training). And they can’t run LLMs. Maybe you can run a smaller model under 4B, but those aren’t exactly great for accuracy.

        The best you could hope for is to run a very small instruct model trained on very specific data (like robotic actions) that doesn't need accuracy in the sense of "knowledge accuracy".

        And completely forget any kind of generative image stuff.

        • youmaynotknow

          Same reply. And you can add as many TPUs as you want to push it to whatever level you want. At 59 bucks a piece, they'll blow any 4070 out of the water for the same or less cost. But to the OP: you don't have to believe any of us. You're in that field; I'm sure you can find the info on whether these would fit your needs or not.

          • @CeeBee@lemmy.world

            And you can add as many TPUs as you want to push it to whatever level you want

            No you can't. You're going to be limited by the number of PCIe lanes. But putting that aside, those Coral TPUs don't have any memory, which means for each operation you need to shuffle the relevant data over the bus to the device for processing, and then back and forth again. You're going to be doing this thousands of times per second (likely much more), and I can tell you from personal experience that running AI like that is painfully slow (if you can get it to even work that way in the first place).

            You’re talking about the equivalent of buying hundreds of dollars of groceries, and then getting everything home 10km away by walking with whatever you can put in your pockets, and then doing multiple trips.

            What you’re suggesting can’t work.

            • youmaynotknow

              Let's get this out of the way: not a single consumer-grade board has more than 16 lanes on 1 PCIe slot. With the exception of 2 or 3 very expensive new boards out there, you'll be hard pressed to find a board with 3 slots giving you a total max of 28 lanes (16+8+4). So, regardless of TPU or GPU, that's going to be your limit.

              GPUs are designed as general-purpose processors that have to support millions of different applications and software. So while a GPU can run multiple functions at once, in order to do so it must access registers or shared memory to read and store the intermediate calculation results. And since the GPU performs tons of parallel calculations on its thousands of ALUs, it also expends large amounts of energy in order to access memory, which in turn increases the footprint of the GPU.

              TPUs are application-specific integrated circuits (ASICs) designed specifically to handle the computational demands of machine learning and accelerate AI calculations and algorithms. They were created as a domain-specific architecture: instead of designing a general-purpose processor like a GPU or CPU, they were designed as a matrix processor specialized for neural network workloads. Since the TPU is a matrix processor instead of a general purpose processor, it removes the memory access problem that slows down GPUs and CPUs and requires them to use more processing power.

              Get your facts straight and read more before you try to send others on wild goose chases. As I said, the OP already works in this field; it shouldn't be hard for him to find the information and make an educated decision.

              • @CeeBee@lemmy.world

                A lot of what you said is true.

                Since the TPU is a matrix processor instead of a general purpose processor, it removes the memory access problem that slows down GPUs and CPUs and requires them to use more processing power.

                Just no. Flat out no. Just so much wrong. How does the TPU process data? How does the data get there? It needs to be shuttled back and forth over the bus. Doing this for a 1080p image several times a second is fine. An uncompressed 1080p image is about 8MB. Entirely manageable.

                Edit: it's not even 1080p, because the image would get resized to the input size. So again, 300x300x3 for the last model I could find.

                /Edit

                Look at this repo. You need to convert the models using the TFLite framework (TensorFlow Lite), which is designed for resource-constrained edge devices. The max resolution for input size is 224x224x3. I would imagine it can't handle anything larger.

                https://github.com/jveitchmichaelis/edgetpu-yolo/tree/main/data

                Now look at the official model zoo on the Google Coral website.

                https://coral.ai/models/

                Not a single model is larger than 40MB, whereas LLMs start at well over a gig for even smaller (and inaccurate) models. The good ones start at about 4GB, and I frequently run models at about 20GB. The size in parameters really makes a huge difference.

                You likely/technically could run an LLM on a Coral, but you're going to wait on the order of double-digit minutes for a basic response, if not way longer.

                It’s just not going to happen.

                • youmaynotknow

                  OK man, don't pop a vein over this. I'm a hobbyist, with some experience, but a hobbyist nonetheless. I'm speaking from personal experience, nothing else. You may well be right (and thanks for the links, they're really good for me to learn even more).

                  I guess, at the end of the day, the OP will need to make an informed decision on what will work for him while adhering to his budget.

                  I’m glad to be here, because I can help people (at least some times) and learn at the same time.

                  I just hope the OP ends up with something that'll fit his needs and budget. I will be adding a K80 to my rig soon, only because I can let go of 50 bucks and want to test it until it burns.

                  I wish you all a very nice weekend, and keep tweaking, it's too much fun.

    • @CosmicTurtle@lemmy.world (OP)

      I was really hoping to cannibalize the 32 GB of DDR3 RAM but I couldn't find a MoBo that supports it anymore. Then I saw DDR5 is the latest!

      I don’t really do any gaming. If I wasn’t going to tinker with AI, I’d just need a card for dual DisplayPort output. I can support HDMI but…I prefer DP

      • youmaynotknow

        The 4070 Ti will give you quite a few years out of it for sure. You could also completely forego the GPU and get a couple of CUDAs for a fraction of the cost. Just use the integrated graphics and you're golden.

        • @CeeBee@lemmy.world

          You could also completely forego the GPU and get a couple of CUDAs for a fraction of the cost.

          What is this sentence? How do you “get a couple of CUDA’s”?

          • youmaynotknow

            Dude, you KNOW I'm talking about TPUs. The name escaped my mind at the moment. Sorry if my English is not to your royalty level. Are you really so bored that you have to make a party out of that? Ran out of credits on Pornhub or something?

          • youmaynotknow

            I misspoke, and I apologize. I could not recall the term TPU, so I just went with the name of the protocol (CUDA). Nvidia has various TPU devices that use the CUDA protocol (like the K80, for example). TPUs (Tensor Processing Units) are coprocessors designed to run some GPU-intensive tasks without the expense of an actual GPU. They are not a one-to-one replacement, as they perform calculations in completely different ways.

            I believe you would be well served by researching a bit and then making an informed decision on what to get (TPU, GPU or both).

          • @CeeBee@lemmy.world

            Are CUDAs something that I can select within pcpartpicker?

            I’m not sure what they were trying to say, but there’s no such thing as “getting a couple of CUDA’s”.

            CUDA is a framework that runs on Nvidia hardware. It's the hardware that has "CUDA cores", which are large numbers of low-power processing units. AMD calls them "stream processors".

  • @grue@lemmy.world

    Well, let’s see:

    • You no longer have to set jumpers to “master” or “slave” on your hard drives, both because we don’t put two drives on the same ribbon cable anymore and because the terminology is considered kinda offensive.

    • Speaking of jumpers, there’s a distinct lack of them on motherboards these days compared to the ones you’re familiar with: everything’s got to be configured in firmware instead.

    • There’s a thing called “plug 'n play” now, so you don’t have to worry about IRQ conflicts etc.

    • Make sure your power supply is “ATX”, not just “AT”. The computer has a soft on/off switch controlled through the motherboard now – the hard switch on the PSU itself can just normally stay on.

    • Cooling is a much bigger deal than it was last time you built a PC. CPUs require not just heat sinks now, but fans too! You’re even going to want some extra fans to cool the inside of the case instead of relying on the PSU fan to do it.

    • A lot more functionality is integrated onto motherboards these days, so you don’t need nearly as big a case or as many expansion slots as you used to. In fact, you could probably get by without any ISA slots at all!

    • @Spiralvortexisalie@lemmy.world

      While I love this list, it is more applicable to the turn of the century than to a decade ago. I was half expecting to see "RAM no longer has to be installed in pairs" on the list.

      ETA: Talking about EDO memory, not dual channel.

      • @Rehwyn@lemmy.world

        I think you may have misread OP's post. They haven't built a PC since shortly after they were 10-11, which was almost 30 years ago. So developments since the turn of the century are in fact relevant here, heh.

  • ivanafterall

    A lot of great suggestions here already. But nobody is mentioning that if you really want to future-proof, you should go fully quantum.

    • @CosmicTurtle@lemmy.world (OP)

      I actually did. And the quantum twin that succeeded is now solving global warming.

      I am the twin that didn’t succeed.

      • MxM111

        According to global trends, your twin did not succeed either. Sorry.

  • @Decronym@lemmy.decronym.xyz (bot)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    NAS   Network-Attached Storage
    NVMe  Non-Volatile Memory Express interface for mass storage
    PCIe  Peripheral Component Interconnect Express
    PSU   Power Supply Unit
    RAID  Redundant Array of Independent Disks for mass storage
    SATA  Serial AT Attachment interface for mass storage
    SSD   Solid State Drive mass storage
    ZFS   Solaris/Linux filesystem focusing on data integrity


  • @phrogpilot73@lemmy.world

    AM5 sockets are now LGA, like Intel's. AM4 was the last PGA socket, so bent pins on the chip are a thing of the past. Make sure to leave the socket cover in place until you install the CPU. Now the fear is bending a pin on the MoBo.

    • @Nollij@sopuli.xyz

      I think you mean LGA (Land Grid Array), meaning the pins are on the motherboard. Ball Grid Array (BGA) is used for embedded, non-removable CPUs.

      • @phrogpilot73@lemmy.world

        I did. This should teach me not to try and cook dinner and post at the same time. I’m NOT that good at multitasking…

    • @CosmicTurtle@lemmy.world (OP)

      I typically code a lot of back-end and processor-intensive workloads. The issue I have with i5s is that they don't seem to be as "snappy" as i7s. I've worked with both for good long periods of time. When I had an i5 laptop, I had to off-load a good majority of my development to the cloud because I couldn't do containers, listen to music and run two monitors at the same time. I never had the same issue with i7 processors, even on a laptop.

  • Possibly linux

    Could you buy both an AMD GPU and an Nvidia GPU? You could pass the Nvidia GPU into a VM with VFIO for AI, and then daily-drive the AMD card with FOSS drivers. (AMD is in a little less demand.) There is also the option of Intel GPUs, as they should work pretty well under Linux.

    I personally would avoid Ubuntu due to snaps, as there are many other options. Do what you feel comfortable with, but you could also go with Linux Mint or Fedora, both of which don't have snap.

    For AI I’m less experienced but I would use containers as that will make the setup much nicer.

    • @CosmicTurtle@lemmy.world (OP)

      Two GPUs? Is that a thing? How does that work on a desktop? Honestly, if it wasn't for my curiosity about AI, I'd just go with the onboard video, though given my need for specific resolutions, I find comfort in having a dedicated card.

      I’ve been using ubuntu exclusively for 10 some years and don’t use snap at all. tbh, not even sure what snap is.

      If it’s not apt, then I don’t use it.

      • youmaynotknow

        Snap is just proprietary packaging by Canonical. Basically the same as Flatpak, but fully controlled by Canonical, store and all. Integrated graphics will give you as much resolution as most GPUs, albeit without being able to render at dedicated-GPU speeds. But unless you're actually rendering very heavy videos, integrated graphics matched together with 1 or more CUDA TPUs will get you there, and YOU set the limits.

      • @grue@lemmy.world

        Two GPUs? Is that a thing? How does that work on a desktop?

        GPUs these days aren’t like your old Voodoo, with its daisy-chained VGA port and one-way, fixed-function graphics pipeline. They can actually send the results of their calculations back to the CPU over the PCIe bus instead of only out to the monitor!

        (In all seriousness though, you don’t actually need two GPUs.)

      • Possibly linux

        You do use it: a bunch of apt packages automatically install the snap instead.

        For Nvidia I still think passthrough is the best option, as it isolates the Nvidia issues to a VM instead of the host. If you aren't going to spend a bunch of time on AI then you can just use a CPU, as long as you have enough RAM.
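
        If you go the VFIO route, here's a minimal sketch of the usual setup on Ubuntu with an Intel CPU (the PCI IDs below are hypothetical; substitute whatever lspci -nn reports for your card):

        # 1. Enable the IOMMU on the kernel command line
        #    (GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub): intel_iommu=on iommu=pt
        # 2. Find the GPU and its audio function:
        lspci -nn | grep -i nvidia
        # 3. Bind both functions to vfio-pci at boot (example IDs):
        echo "options vfio-pci ids=10de:2484,10de:228b" | sudo tee /etc/modprobe.d/vfio.conf
        sudo update-grub && sudo update-initramfs -u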

        • @CosmicTurtle@lemmy.world (OP)

          With the conversations I’m having here, I’m leaning in the direction of integrated video (assuming I can get one with display port) and a discrete card just for AI work.

          I use VirtualBox for VMs. I’m assuming there are instructions on how to give the card to the VM? My cursory google search came up with dubious results.

          • youmaynotknow

            Most new boards will have at least a DisplayPort and an HDMI port. Add that most also have Thunderbolt 4, so you can plug in an HDMI or DisplayPort dongle. The sky is the limit, man. On the VM panorama, VMware is now all fucked with their forced subscription model, VirtualBox is still a thing but GPU passthrough (I've heard, can't really confirm) seems to have turned into a real shitshow. KVM/QEMU seems to be the only alternative that makes sense right now.

      • @CeeBee@lemmy.world

        Two GPUs? Is that a thing? How does that work on a desktop?

        I've been using two GPUs in a desktop since 15 years ago, one AMD and one Nvidia (although not lately).

        It really works just the same as a single GPU. The system doesn’t really care how many you have plugged in.

        The only difference you have to care about is specifying which GPU you want a program to use.

        For example, if you had multiple Nvidia GPUs you could specify which one to use from the command line with:

        CUDA_VISIBLE_DEVICES=0

        or the first two with:

        CUDA_VISIBLE_DEVICES=0,1

        Anyways, you get the idea. It’s a thing that people do and it’s fairly simple.
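
        As a concrete (hypothetical) example, this would run a training script on only the second card, which the process then sees renumbered as device 0:

        CUDA_VISIBLE_DEVICES=1 python train.py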

      • @fuckwit_mcbumcrumble@lemmy.world

        If you don't need a lot of GPU horsepower besides the AI stuff, then you could just use the integrated graphics and have a dedicated GPU for the AI stuff.

        Having multiple GPUs in your system isn't really that special. Plug HDMI into GPU1 to have GPU1 drive your display/play games. Plug HDMI into GPU2 to have GPU2 do stuff. If you're doing AI work then you don't need anything connected to the GPU; the program just needs to know it's there and to use it.

        The only thing to look out for when using the iGPU and a dGPU is making sure the BIOS doesn't turn off the iGPU when it detects the dGPU. If you have 2 dGPUs then it shouldn't matter, outside of maybe the BIOS wanting to use the first one.

  • @InformalTrifle@lemmy.world

    Someone might have already mentioned it, but M.2 is just a physical connector. You can have M.2 SATA or M.2 NVMe drives. Prefer NVMe (a modern motherboard should support it, but older ones only do SATA).

  • Rimu

    GPUs these days use a whole lot of power. Ensure your power supply is specced appropriately.

    • @fuckwit_mcbumcrumble@lemmy.world

      And make sure it’s an actually good PSU too.

      I know in gaming (and possibly other loads) Nvidia 40-series, and especially 30-series, cards love transient spikes, which can easily exceed 2x the nominal power consumption. Make sure your PSU can handle those spikes, both in terms of brevity and current.

      • andrew

        My 3090 is a light flickering machine. Kind of annoying tbh.

      • TheOneCurly

        That's what finally did in my 10-year-old Corsair. I was technically within spec on wattage with my new 4070, but certain loads would cause it to trip the over-current protection anyway.

  • @Ludrol@szmer.info

    For AI/ML workloads, VRAM is king.

    As you are starting out, something older with lots of VRAM would be better than something faster with less VRAM for the same price.

    The 4060 Ti is a good baseline to compare against, as it has a 16GB variant.

    "Minimum" VRAM for ML is around 10GB; the more the better. Less VRAM can be usable, but with sacrifices in speed and quality. (As a rough rule of thumb, a model's weights alone need parameter count × bytes per parameter: a 7B-parameter model at 16-bit precision is about 14GB.)

    If you like that stuff, in a couple of months you could sell the GPU you bought and swap it for a 4090 Super.

    For AMD, support is confusing, as there is no official ROCm support for mid-range GPUs on Linux, but some people say it works.

    There is also the new ZLUDA project, which enables running CUDA workloads on ROCm:

    https://www.xda-developers.com/nvidia-cuda-amd-zluda/

    I don't have enough info to recommend AMD cards.
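
    If you want to see how much VRAM a model actually eats while it runs, watching nvidia-smi (it ships with the Nvidia driver) in a second terminal is the easy check:

    watch -n1 nvidia-smi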

  • @AngryishHumanoid@reddthat.com

    I went through the same process myself a couple years ago, first PC build in a while. The biggest shock for me was finding out SATA drives (SSD, HDD, etc.) were outdated: it's all about NVMe drives, which look like a stick of RAM and plug directly into the motherboard.

    • @CosmicTurtle@lemmy.world (OP)

      I’ve been learning the same. Though, I don’t get the sense that SATA is going out of style. I could be wrong though.

      • @Dudewitbow@lemmy.zip

        SATA would be used more for secondary storage or for systems set up as network-attached storage. The NVMe M.2 form factor for SSDs is more convenient for users, as it's both smaller and does not require the user to wire 2 cables to use it.

          • @felbane@lemmy.world

            The other poster said it's about convenience, but that's not really true. The claim to fame for NVMe drives is speed: while SATA SSDs top out around 550 MB/s, the latest NVMe drives can hit 7000+ MB/s.

            It's for this reason that you should pay attention to which M.2 drive you choose (if speed is what you're after). SATA-based M.2 drives exist – and they run at SATA speeds – so if you see a cheap M.2 drive for sale it's probably SATA and intended for bulk storage on laptops and SFF PCs without room for 2.5" drives. Double check the specs to be sure what you're getting.
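
            On Linux you can also just ask an existing machine; lsblk's TRAN column reports each drive's transport (nvme vs. sata, even for drives in M.2 slots):

            lsblk -d -o NAME,TRAN,MODEL,SIZE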

            • youmaynotknow

              This is the reason why most of us have moved to NVMe. Speed, when compared to SATA, is ludicrous. But SATA is not going anywhere any time soon.

            • @CosmicTurtle@lemmy.world (OP)

              I just checked the specs for the M.2 NVMe drive that I pulled from an old laptop. Its read speed is 3000 MB/s, so it looks like I'm good there. Thanks for the heads up though.

          • @Dudewitbow@lemmy.zip

            It's more so the convenience factor. This also doesn't consider the hard-drive mounting mechanism that you'd spend time on as well, which limits case choice. With an M.2 drive, the drive can be installed outside of the case trivially.

      • @bhmnscmm@lemmy.world

        You're right, SATA isn't going anywhere for a very long time. If you have a need for 4+ TB of total storage, there is nothing at all wrong with using HDDs or 2.5" SSDs.

    • Possibly linux

      Unless you want a bunch of storage and modularity. The benefit of SATA is that it is much more flexible, and SATA SSDs are cheaper and can be put in a ZFS RAID to increase maximum speeds.
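
      As a sketch of that ZFS option (pool and device names here are just examples):

      # Two SATA SSDs striped for speed (no redundancy):
      sudo zpool create tank /dev/sdb /dev/sdc
      # Or mirrored: faster reads, same capacity as one drive:
      sudo zpool create tank mirror /dev/sdb /dev/sdc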

      • @CosmicTurtle@lemmy.world (OP)

        I’ve gone back and forth on whether I need RAID locally. Giving up at least a third of your storage capacity (assuming RAID 5) for the off-chance that your hard drive dies in 3-4 years seems like a high price to pay. I had two drives fail in the lifespan of my current desktop. And I had enough warning from SMART that I could peel off the data before the drives bricked. I know I got lucky, but still…

        • @felbane@lemmy.world

          If you’re practicing 3-2-1 backups then you probably don’t need to bother with RAID.

          I can hear the mechanical keyboards clacking; Hear me out: If you’re not committed to a regular backup strategy, RAID can be a good way to protect yourself against a sudden hard drive failure, at which point you can do an “oh shit” backup and reconsider your life choices. RAID does nothing else beyond that. If your data gets corrupted, the wrong bits will happily be synced to the mirror drives. If you get ransomwared, congratulations you now have two copies of your inaccessible encrypted data.

          Skip the RAID and set up backups. It can be as simple as an external drive that you plug in once a week and run rsync, or you can pay for a service like backblaze that has a client to handle things, or you can set up a NAS that receives a nightly backup from your PC and then pushes a copy up to something like B2 or S3 glacier.
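
          The external-drive version really can be a one-liner (paths here are examples):

          # Mirror your home directory to the backup drive, pruning files you've deleted:
          rsync -a --delete ~/ /mnt/backup/home/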

          • @Nollij@sopuli.xyz

            The only thing I’ll add is that RAID is redundancy. Its purpose is to prevent downtime, not data loss.

            If you aren’t concerned with downtime, RAID is the wrong solution.

            • @CosmicTurtle@lemmy.world (OP)

            I know that this is the self-hosted community but I very much agree. The way I run my desktop is that I can, in most cases, lose my primary hard drive and I’ll survive. It won’t be pretty and I might have a few local repos that I haven’t synced in a while but overall, it ain’t bad.

            Now, that doesn’t mean I don’t want my primary hard drive restored if I can do it. I’ve been lucky enough to be able to restore them from the drive. But if I can’t, the most I lose is some config files, which I should start to version control but I get lazy.

            I can’t back up my media. It’s just too big. But yar.

            My greatest fear is losing my porn collection. 😅 But not enough to RAID.

  • Æsc

    Compared to those pain points, building a modern PC should be a breeze. CPUs go in Zero Insertion Force sockets, so as long as you remember to lift the little lever you won't bend any pins. People don't even wear static discharge wrist bands anymore (although it couldn't hurt) or worry about shorting things out. And power connectors only fit one way, unlike the AT power connector.

    Speaking of breezes, your only pain point might be making sure you have enough air circulation to cool all that gear.

    • @CosmicTurtle@lemmy.world (OP)

      I remember working on a PC back in my Geek Squad days that had a lever.

      For air circulation, what should I be on the lookout for? Making sure I have clearances, of course, but should I buy more fans than I need?

      • @lobo@lemmy.world

        Cases usually have fans preinstalled, and those should be fine. Just pay attention to the direction: have them all blow air front to back. There is usually an arrow indicating which way a fan moves air.

        Run a benchmark after building the PC and check temperatures.

        • @fuckwit_mcbumcrumble@lemmy.world

          Don't just look at temperatures though; look at clock speed too. 95°C+ is normal for modern high-end CPUs (AMD 7000-series chips actively try to run at that temp under full load). What you want to make sure is that it's not throttling.

          If this is a server and you don't want your thermal paste to be toast in a year, then I'd suggest lowering the maximum temperature in the BIOS if it lets you.
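
          A quick way to spot throttling on Linux while the benchmark runs (sensors comes from the lm-sensors package):

          # Watch core clocks under load; they should hold near boost speed:
          watch -n1 'grep "cpu MHz" /proc/cpuinfo'
          # And in another terminal, watch temperatures:
          watch -n1 sensors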

        • youmaynotknow

          That and you need to decide how much positive or negative pressure you want in there as well. You could always do some calculations: treat your case as an open control volume where mass can transfer across the boundaries. Then the sum of air going into and out of the case must equal the rate of change of air in the case. Assuming the volume of air in your case is constant, this term is zero. So you can look at the rated volume flow rate for each fan (CFM, cubic feet per minute) and see if this summation is positive or negative. A positive value means "positive pressure" and a negative value "negative pressure".

          The only problem is if the fans are not running at max RPM and/or the rated CFM value, which is the case if you have your fans plugged into the motherboard (regardless of whether you're using PWM or 3-pin). In this case, you would have to calculate the volume flow rate of each individual fan as a function of the RPM. This may not be a linear function and would probably require taking some data and coming up with a regression for the data. This would be way harder to do.

          tldr: add up the CFM going into the case and subtract the CFM leaving the case. If the value is positive you have "positive pressure". For example, two 60 CFM intakes against one 80 CFM exhaust gives 120 − 80 = +40 CFM: positive pressure.

  • youmaynotknow

    I for one would not purchase any Intel hardware as long as AMD is around. Not that they're bad or anything, but AMD gives me much more "bang for the buck". To future-proof your rig, I strongly suggest you go with the latest socket (be it Intel or AMD, doesn't matter) and make sure you get DDR5 RAM and PCIe Gen 4, and then have at it.

    Getting an 80 Plus Gold power supply is always nice too.

    And then there’s the cooling. I see you went with a radiator and fan, but I strongly suggest getting some type of liquid cooling. The prices are not that bad anymore (unlike about 10 years ago, which was insane).

    As for the board, you’ll get all kinds of different suggestions. Some people swear by Asus, I’d rather go with Gigabyte (love the Aorus line), so it’ll come down to brand trust at the end of the day.

    As for the card, I hear a lot of crap given to Nvidia about being closed source, and I sort of agree that’s messed up, but ATI cards (while pretty good) are always a step behind Nvidia. Plus, most distros have them working out of the box.

    It can be intimidating after so many years, but it's way simpler than it was back then.

    Good luck man, you got this, there’s nothing to fear but fear itself.

    • Possibly linux

      AMD graphics are terrible at any kind of media encoding or decoding. That probably won't affect most people, but it can be a problem for self-hosting.

      I also find that Intel CPUs are much easier to find than AMD when it comes to used hardware.

      • youmaynotknow

        They are not terrible, they just don't hold a candle to an Nvidia card of the same tier. And the OP is buying new, not used. Not once did I say anything bad about any brand; like the OP, I'm not married to any of them, and I only speak from personal experience. I don't make money from them, they make money out of me.

        • Possibly linux

          I take it you're talking about Intel? AMD GPUs tend to be a pretty good deal, as there are tons of them used on eBay.

          • youmaynotknow

            Maybe. I've never bought PC parts on eBay, or used parts for that matter; too risky from my perspective. And yes, I'm talking about AMD GPUs. They are very good, but still behind Nvidia in every aspect.

            • Possibly linux

              Except for Linux support. Nvidia is awful on Linux compared to Intel (best) and AMD (solid).

              • youmaynotknow

                Name one example in which an Nvidia card performed equal to or worse than an AMD GPU from the same gen, regardless of the driver. Why do you think that even manufacturers focused on hardware for Linux choose Nvidia over AMD GPUs? Cost? Unlikely, since Nvidia is usually more expensive.

                • @tabular@lemmy.world

                  Some value software freedom more than performance, and the open source Nouveau Nvidia driver isn’t quite there yet on performance.

                • Possibly linux

                  Cost-wise I believe they are close. For instance, according to Tom's Hardware an RTX 3070 is the same cost and performance as a 6700 XT.

                  https://www.tomshardware.com/features/nvidia-geforce-rtx-3070-vs-amd-radeon-rx-6700-xt

                  What you're saying doesn't align with what I've seen and read online. Admittedly I'm not a GPU expert, so maybe I'm just out of touch. Anyway, I wouldn't buy Nvidia because the free-software drivers are still being worked on. We are seeing a lot of progress with NVK, but it's nowhere near complete.

                  To be honest with you I mostly use Intel integrated graphics which works very well and can even do some light gaming.

    • @CeeBee@lemmy.world

      ATI cards (while pretty good) are always a step behind Nvidia.

      Ok, you mean AMD. They bought ATI like 20 years ago now and that branding is long dead.

      And AMD cards are hardly “a step behind” Nvidia. This is only true if you buy the 24GB top card of the series. Otherwise you’ll get comparable performance from AMD at a better value.

      Plus, most distros have them working out of the box.

      Unless you're running a kernel older than 6.x, every distro will support AMD cards. And even then, you could always install the proprietary blobs from AMD and get full support on any distro. The kernel version only matters if you want to use the FOSS kernel drivers for the cards.

      • youmaynotknow

        I agree that I could be wrong on the comparison. Maybe they are not that far behind, but guaranteed not at the same level when comparing apples to apples. I wish that wasn’t the case, but it still is.

        • @CeeBee@lemmy.world

          when comparing apples to apples.

          But this isn’t really easy to do, and impossible in some cases.

          Historically, Nvidia has done better than AMD in gaming performance because there are just so many game-specific optimizations in the Nvidia drivers, whereas AMD's didn't have them.

          On the other hand, AMD historically had better raw performance in scientific calculation tasks (pre-deeplearning trend).

          Nvidia has had a stranglehold on the AI market entirely because of their CUDA dominance. But hopefully AMD has finally bucked that trend with their new ROCm release that is a drop-in replacement for CUDA (meaning you can just run CUDA-compiled applications on AMD with no changes).

          Also, AMD’s new MI300X AI processor is (supposedly) wiping the floor with Nvidia’s H100 cards. I say “supposedly” because I don’t have $50k USD to buy both cards and compare myself.

          • youmaynotknow

            I have absolutely no counter for you on this one, as I'm not aware of the highest-level stuff between manufacturers. And it makes sense: Nvidia has been the go-to manufacturer for gaming, and developers usually improve their code based on what's needed to run the best possible on Nvidia hardware. I'll research more on this when I have a chance; this seems to be a very interesting topic. Thank you for pointing this out.

    • @CosmicTurtle@lemmy.world (OP)

      I for one would not purchase any Intel hardware as long as AMD is around. Not that they're bad or anything, but AMD gives me much more "bang for the buck".

      If you have a processor line in mind, let me know. Happy to give them another look, given my experience with AMD is 30 some years old.

      And then there’s the cooling. I see you went with a radiator and fan, but I strongly suggest getting some type of liquid cooling. The prices are not that bad anymore (unlike about 10 years ago, which was insane).

      I’m not tied to the cooling solution I picked. I just picked something that looked affordable and did what I wanted. I’d love to do liquid cooling so long as it isn’t a pain. I helped my friend back in high school do liquid cooling and it was a proper mess. We came close to shorting his entire rig.

      As for the board, you’ll get all kinds of different suggestions. Some people swear by Asus, I’d rather go with Gigabyte (love the Aorus line), so it’ll come down to brand trust at the end of the day.

      I have zero brand loyalty here. The boards I’m looking at right now all have embedded wifi with the annoying antenna…I really want bluetooth embedded so it seems like I’ll have to have wifi but just not use it.

    • @rambos@lemm.ee

      May I ask why water cooling? It's just louder and more expensive afaik; they just look awesome.

      Decent air coolers are cheap, silent and easier to install. When I was overclocking an i5 9600K, temperature was not an issue at all. Is it different with CPUs today?

      • youmaynotknow

        Simple: the difference in cost is negligible, and in exchange you keep the CPU at way lower temperatures, extending the life of the CPU and better avoiding throttling. And they are not louder than a regular fan with a heatsink, since the fans spin at lower RPMs most of the time; they don't need to ramp up because the CPU is already running cooler. And if you add a high-end GPU, that's way louder and will drown out the noise of any other fan in the rig when it kicks in.