AMD’s Radeon boss has talked about RDNA 3 GPU power efficiency, the 12VHPWR connector on Radeon RX 7000 GPUs, and ray tracing capabilities.

The interview is very detailed and we would encourage you to read the full thing over at Club386, but some interesting comments were made regarding a few aspects of the RDNA 3 “Radeon RX 7000” GPU family and what we can expect in the coming generation.

Back when AMD was in the process of launching its RDNA 3 GPU architecture, the company promised a monumental +54% increase in power efficiency over RDNA 2 GPUs through the use of chiplets and other changes. However, the launch delivered only modest gains in the efficiency department, while NVIDIA took its efficiency to a whole new level with the Ada GPU architecture. Scott says that AMD believes in offering good performance per watt across its GPU lineup and that it matters more on the notebook front. So far, AMD has only brought its non-chiplet Navi 33 to laptops.
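
To make that claim concrete, here is a minimal sketch of what a +54% performance-per-watt uplift implies; the 300 W baseline is an arbitrary illustrative figure, not an AMD number:

    # What a claimed +54% performance-per-watt improvement implies.
    # Baseline of 300 W for some fixed workload is an arbitrary, illustrative number.
    baseline_watts = 300.0
    perf_per_watt_gain = 1.54

    # Same performance as the baseline card at roughly 65% of the power...
    watts_for_same_perf = baseline_watts / perf_per_watt_gain
    # ...or ~1.54x the performance at the same 300 W.
    print(f"same performance at ~{watts_for_same_perf:.0f} W "
          f"(vs. {baseline_watts:.0f} W), or {perf_per_watt_gain:.2f}x "
          f"performance at the same power")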

  • tal@kbin.social · 13 points · 1 year ago

    I was gonna say that it matters on laptops, and I don’t know if I’d say that most people playing games are doing so on the desktop, but the actual quote is more reasonable than the title:

    In notebooks, it matters greatly. In desktop, however, it matters, but not to everyone.

  • deur@feddit.nl · 9 points · 1 year ago

    Damn. That was my last hope for a company putting efficiency as a priority.

    (Since OP asked:) I care about efficiency because I hate having to deal with the waste heat from my computer. It makes it more challenging to regulate indoor temperature in summer where I live. Humidity is always high, so keeping the house cool and dry as cheaply as possible is my highest priority.

    • tal@kbin.social · 2 points · 1 year ago

      I’ve seen people experimenting with running insulated flex duct from a PC case exhaust out a window.

      I would imagine that one could hypothetically have an intake, a case that seals well, and just have one of those window vent panels and route outside air in, through the PC, and then back outside.

  • xNIBx@kbin.social · 8 points · 1 year ago

    Efficiency matters; it has always mattered. A card that uses less power requires less cooling. Therefore it can be quieter, run cooler (important during the summer), be cheaper to make (smaller heatsink) and, in the long run, be significantly cheaper to own (less power consumption).

    Efficiency equals performance. The more efficient your architecture is, the more performance you can get out of it. They are the same thing. Just because Nvidia decided to scam everyone by significantly downgrading all non-4090 GPUs doesn’t mean that this isn’t true. AMD simply isn’t competitive anymore, so Nvidia can afford to scam everyone and everyone continues buying Nvidia.

    AMD is barely cheaper (per unit of performance) than Nvidia. I am tired of all this bullshit internet propaganda about AMD’s offerings. All the reviewers are talking about how much better AMD is, while AMD cards offer 10% better rasterization performance at best (for the same money), and all those reviewers run Nvidia GPUs and haven’t run an AMD GPU on their daily machine in 10 years.

    Just imagine if tomorrow the roles were reversed and Nvidia was making GPUs that used infinitely more power for equal performance, with fewer features and shittier performance on cutting-edge features. In fact, you don’t need to imagine; this was the case for almost 15 years with DX9 and even DX12 cards. Take an AMD 290 and compare it with an Nvidia 780. Hell, nowadays even the 280X, a card that cost 50% of the price of a 780, beats the Nvidia 780.

    https://www.youtube.com/watch?v=GcTGAk2Ejzw

    That’s because AMD’s GPU architecture used to be amazing and infinitely more futureproof. But the current AMD GPU architecture is clearly inferior, and power consumption is a big indicator of that. AMD is forced to push so much power through its cards in order to get competitive performance, while Nvidia is literally chopping the legs off its cards and lobotomizing them in order to maximize profits, and despite all that Nvidia cards are still competitive. That’s how far ahead Nvidia’s architecture is at the moment.

    To be fair, part of this difference is because of the manufacturing node, but that’s a choice AMD made. The soon-to-be-released 7800 XT is similar in size to a 4080, a card that is basically twice as fast (and twice as expensive), and one that dedicates a big part of its architecture to ray tracing and AI cores. Nvidia isn’t even trying, like at all.

    • CIWS-30@kbin.social · 1 point · 1 year ago

      You’re right, and I upvoted you, but one issue is that unless people buy AMD cards anyway, they’ll never have the research funds to ever match, much less beat, Nvidia in the future. All things being equal (and they usually are for my needs), I tend to buy AMD over Nvidia to keep them in the race, so that GPUs don’t turn into a monopoly. As it is, the duopoly we have now (excluding Intel, which I’m not sure will continue making consumer GPUs in the long run) isn’t great for consumers.

      The only reason AMD GPUs are even as good as they are now is due to the research money they got from Ryzen taking the fight to Intel.

      • xNIBx@kbin.social · 1 point · 1 year ago

        Asking people to buy AMD because it is “good enough” in order to avoid a monopoly isn’t a sound business strategy. That’s why AMD’s GPU share has plummeted. There are basically zero AMD 7xxx cards in the Steam charts, while all the Nvidia 4xxx cards are skyrocketing in market share. Every year, as more and more old AMD GPUs die, AMD’s market share drops another percentage point.

        Intel can’t even run the most anticipated game of the year, Starfield. How the fuck does that happen? Did they not contact Bethesda to ask for the code before the game was released?

        The only reason AMD GPUs are even as good as they are now is due to the research money they got from Ryzen taking the fight to Intel.

        AMD was taking Ls in the CPU market too, after Intel released the Core 2 Duo. It wasn’t till Ryzen, and especially Ryzen 2, that AMD bounced back. They need to do that with GPUs too. And when they were taking Ls, at least they could offer more cores. What do AMD GPUs offer? More VRAM? Maybe the 7800 XT will be good, but it needs to be faster than the 4070, while offering 16 GB of VRAM, for $100 less.

        Even the same rasterization performance for $100 less than the 4070 might not be good enough. I don’t think 12 GB is going to be a significant bottleneck in the future (outside of specific scenarios, where you can adjust a few settings), and people are willing to pay a bit more to get access to Nvidia features (DLSS, better ray tracing performance, lower power consumption, etc.). I am still hopeful for the 7800 XT, though the 7900 XT has 40% more cores, so the 7800 XT is significantly cut down and is using higher frequency as a crutch.

        People used to make fun of Intel and Nvidia for decades for using so much power and pushing frequency just so their shitty architectures could be somewhat competitive. But when it comes to AMD GPUs, I guess that is acceptable because they are the underdog.

        But personally, as someone living in a house without AC, I’d rather not die while gaming during the summer. In fact, I am trying cloud gaming at the moment, and I love how quiet and cool my PC stays while I am gaming on the cloud.

  • geosoco@kbin.social (OP) · 8 points · 1 year ago

    Do folks really not care about efficiency?

    I’m curious whether efficiency is less important in places with cheap electricity.

    • wahming@monyet.cc · 3 points · 1 year ago

      Does your GPU really contribute that much to your electric bill? Idk, I haven’t done the math myself

      • wolfshadowheart@kbin.social · 8 points · 1 year ago

        It’s pretty sizeable if you’re running it 24/7 without checks.

        A recently common example might be someone interested in running Stable Diffusion locally, with the program running overnight and drawing ~300-400 watts for 12 hours. For comparison, an electric space heater can run at up to 1,200 watts, and those are known to absolutely raise the electric bill if left unchecked (i.e. run overnight rather than put on a timer).

        For gaming 20 hours a week? Probably not too much. For gaming 20 hours a week and running AI a few overnights a week (another ~40 hours)? It’s noticeable for sure.
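
        As a rough back-of-the-envelope sketch of that overnight scenario (the ~350 W average draw and $0.15/kWh rate are my own assumptions, not figures from this thread):

            # Rough cost of one overnight Stable Diffusion run at the wattage above.
            # Assumed numbers (hypothetical): 350 W average draw, $0.15 per kWh.
            watts = 350
            hours = 12
            rate_usd_per_kwh = 0.15

            kwh_per_run = watts / 1000 * hours              # 4.2 kWh
            cost_per_run = kwh_per_run * rate_usd_per_kwh   # ~$0.63 per night
            cost_per_month = cost_per_run * 30              # ~$19 if run every night

            print(f"{kwh_per_run:.1f} kWh/run, ${cost_per_run:.2f}/night, "
                  f"~${cost_per_month:.0f}/month if run every night")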

        However, there’s also the factor of ambient heating, so there’s technically some offset if you need heating, and it’s not something to ignore: I had a badly insulated room go from a consistent 68°F up to 85°F even with a window open, hotter in the summers of course.

        Overall, yes but also no. Like with most things, it’s really about the use case and consistency. An NVIDIA GPU in a media server? Higher energy costs than something like Intel Quick Sync, for limited realistic gains but a somewhat noticeable cost increase. A gaming GPU idling high all the time while you’re just browsing and watching videos? Definitely more expensive than a laptop, but with proper checks, like putting the gaming computer in eco mode, they’re more comparable.

        • wahming@monyet.cc · 5 points · 1 year ago

          Righto, thanks for the detailed reply.

          The CEO might not be far wrong in that case; the average user probably doesn’t run their GPU long enough to notice efficiency gains. And given that their preferred market is the people with money to burn, it makes sense they’d target improved performance over efficiency.

          • wolfshadowheart@kbin.social · 3 points · 1 year ago

            I’m inclined to agree as well, although I do think energy efficiency is environmentally important and the solution shouldn’t be to throw more power at the hardware. For that reason I do appreciate some middle ground between the two.

            Realistically, my friend’s 7900 XTX and my 3080 are within the same power consumption under load, but he has 24 GB of VRAM where I do not. Getting that much VRAM with NVIDIA takes an extra 150 watts, on a 4090 or 3090. Regardless of performance elsewhere, that’s pretty sizeable, so it would be a shame to lose that to something like a 30 GB VRAM card from AMD pushing 450 watts.
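
            For a rough sense of what that extra 150 W under load adds up to, a sketch assuming ~20 hours of gaming a week and a hypothetical $0.15/kWh rate (both my assumptions):

                # Yearly electricity cost of an extra 150 W under load while gaming.
                # Assumed numbers (hypothetical): 20 gaming hours/week, $0.15 per kWh.
                extra_watts = 150
                hours_per_week = 20
                rate_usd_per_kwh = 0.15

                kwh_per_year = extra_watts / 1000 * hours_per_week * 52  # 156 kWh
                cost_per_year = kwh_per_year * rate_usd_per_kwh          # ~$23
                print(f"~{kwh_per_year:.0f} kWh/year, roughly ${cost_per_year:.0f}/year extra")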

        • arefx@lemmy.ml · 0 points · 1 year ago

          Most people aren’t running their computer 24/7 doing crazy tasks though. I almost never turn my PC off, but if I’m not using it it’s generally in sleep mode. Sometimes I just leave it on and running, like if I know I have to go out for a few hours but will be right back on the PC where I left off when I return home, but generally it’s asleep, and I can’t imagine it’s using much electricity at that point.

          • wolfshadowheart@kbin.social · 2 points · 1 year ago

            It’s all relative. To add some more context to your description:

            As mentioned, under load my PC (5800X3D + 3080 10 GB) draws between 350 and 575 watts (depending on whether I have monitors plugged into my UPS and on the GPU power draw; some programs draw more than others).

            Idling my PC draws about 175 watts.

            In sleep my PC draws about 68-80 watts.

            Like the NVIDIA GPU media-server example, even though it’s not a lot of power, compared to more efficient computers doing the same task it’s exorbitant.

            You’re right that most people with a GPU won’t even be running something under load for a few hours; if they don’t do rendering and they don’t use AI, then gaming is the only thing left that can really put a GPU to use.

            So then it becomes a question of efficiency when deciding how to optimize those tasks. If AMD can push out performance comparable to NVIDIA’s for 100 watts less, that’s the difference between a PC at idle and a PC in sleep. That’s too sizeable to ignore, even if you just leave the PC on 24/7 as a gaming PC plus always-ready web browser. Similarly, if I’m deciding to put my GPU to use at all, it seems reasonable to consider long-term cost efficiency. It’s weird to think about since we don’t push it much, but 20 hours a week of gaming even 5 years ago vs. today is a huge power difference. Just look at the 1080 Ti: a beast back then that still holds up today, drawing only 300 watts under load, while the 980 Ti gets by on 250 watts.

            In terms of performance, at 450 W even the 3090, let alone the 4090, absolutely blows these out of the water, but in terms of long-term idle they are also, relatively, much more expensive.

            All in all, most people aren’t putting their PC under load 24/7, but most people also aren’t only turning it on as needed. While it’s true that they’re not consistently drawing 300+ watts all the time, they are still likely idling (on, just not being used actively) at higher levels than previous generations. My idle is quite close to the 980 Ti under load, which is pretty insane.
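
            To put a rough yearly number on that ~100 W idle-vs-sleep gap, a sketch assuming the machine sits in that state 12 hours a day at a hypothetical $0.15/kWh (neither figure comes from the thread):

                # Yearly cost of a constant ~100 W difference (roughly idle vs. sleep above).
                # Assumed numbers (hypothetical): 12 hours/day in that state, $0.15 per kWh.
                extra_watts = 100
                hours_per_day = 12
                rate_usd_per_kwh = 0.15

                kwh_per_year = extra_watts / 1000 * hours_per_day * 365  # ~438 kWh
                cost_per_year = kwh_per_year * rate_usd_per_kwh          # ~$66
                print(f"~{kwh_per_year:.0f} kWh/year, roughly ${cost_per_year:.0f}/year")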

            • geosoco@kbin.social (OP) · 2 points · 1 year ago

              Yeah, the specific setup and card also matter. There are still some issues with idle power on newer AMD cards in multi-monitor configs, where they draw >50 W doing nothing and some configs draw 100+ W idling. It’s been an issue since the 7000-series cards released, I believe. They’ve had a few updates which have helped some people, according to various Reddit threads, but not everyone. I think the Nvidia cards by comparison only pull something like 8-20 W idling.

              This isn’t major for most users utilizing sleep most of the day, but it also adds up over time.
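
              A quick sketch of how it adds up, assuming roughly 8 idle hours a day and a hypothetical $0.15/kWh (my numbers, not the thread’s):

                  # Yearly cost of ~50 W multi-monitor idle vs. ~15 W idle, per the figures above.
                  # Assumed numbers (hypothetical): 8 idle hours/day, $0.15 per kWh.
                  RATE_USD_PER_KWH = 0.15
                  IDLE_HOURS_PER_YEAR = 8 * 365

                  def yearly_cost(watts: float) -> float:
                      return watts / 1000 * IDLE_HOURS_PER_YEAR * RATE_USD_PER_KWH

                  for watts in (50, 15):
                      print(f"{watts:>3} W idle: ~${yearly_cost(watts):.0f}/year")
                  print(f"difference: ~${yearly_cost(50) - yearly_cost(15):.0f}/year")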

      • geosoco@kbin.social (OP) · 5 points · 1 year ago

        As with everything, “it depends” on many factors. In the US electricity is relatively cheap in many places (~$0.12-0.15 per kWh), and high-end cards running at high settings can pull ~300 W. Averaging 3 hours a day, that could cost roughly 10-14 cents a day just for the GPU at high-end settings. Not a huge deal, but in places where electricity is 2-3x the price it could be more of an issue (or at surge times for folks in Texas).
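
        For reference, the arithmetic behind that estimate, using only the figures quoted above (300 W, 3 hours a day, $0.12-0.15 per kWh):

            # Daily GPU-only electricity cost from the figures in the comment above.
            watts = 300
            hours_per_day = 3
            for rate in (0.12, 0.15):  # USD per kWh
                cents_per_day = watts / 1000 * hours_per_day * rate * 100
                print(f"at ${rate:.2f}/kWh: ~{cents_per_day:.1f} cents/day")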

        But with the new ATX 3.0 spec and 600 W power cables, we could see double those costs for high-end cards in the next few years, putting full PC power draw at ~900+ watts.

        Efficiency also affects more than just electricity costs. Less efficient chips mean more heat and more massive coolers; many of the higher-end cards today have coolers so thick and heavy that we now have anti-sag braces. It also potentially means more fan noise, or requiring more expensive coolers and more electronics to regulate that power in stages.

        • wahming@monyet.cc · 3 points · 1 year ago

          Yeah, I guess electricity is pretty cheap in my corner of Europe (like a quarter that), so I don’t really notice.

          Efficiency also affects more than just electricity costs. Less efficient chips mean more heat and more massive coolers; many of the higher-end cards today have coolers so thick and heavy that we now have anti-sag braces. It also potentially means more fan noise, or requiring more expensive coolers.

          Good points. I guess the average customer for their new cards is probably willing to put up with those issues to get the latest and greatest performance, or at least that’s what the CEO is counting on.

  • Nefyedardu@kbin.social · 6 points · 1 year ago

    Well, he’s not wrong; people don’t really factor power-efficiency statistics into their graphics card purchases. Ultimately I feel PC gamers just want whatever Nvidia tells them to want. Everybody wants ray tracing and DLSS, even though these features are version-locked behind newer cards and only really useful on low-end (DLSS) and high-end (ray tracing) cards at that. Going by the Steam hardware survey, the grand majority of people have little use for these features, and yet you always see people clamor for them. I used to own a 2080 and I don’t think I used either feature once.

  • FiveMacs@lemmy.ca · 4 points · 1 year ago

    Consumers say money doesn’t matter to GPU manufacturers and continue to not buy overpriced, underpowered cards.

  • argv_minus_one@beehaw.org · 2 points · 1 year ago

    What I want is better rasterization performance per dollar.

    “Ray tracing” is a silly gimmick that contributes almost nothing and halves your frame rate. DLSS/FSR is upscaling, and upscaling is for consoles. To hell with them both.

    I say “ray tracing” in scare quotes because what AMD and NVIDIA call “ray tracing” is nothing like the full-scene path tracing used in CGI movies. That is absolutely awesome, but it’s far too slow to do in real time.

    Also, I’ve heard a lot of complaints from CUDA programmers that AMD GPUs are pretty much useless for anything other than graphics, and NVIDIA is basically the only GPGPU game in town. I don’t know the details, but AMD should probably work on whatever the problem is.

    • KRAW@linux.community · 1 point · 1 year ago

      NVIDIA is basically the only GPGPU game

      NVIDIA GPUs are definitely the go-to these days, but the world’s most powerful supercomputer uses AMD GPUs. I wouldn’t be surprised if AMD picks up speed (though they probably won’t meet or beat NVIDIA). NVIDIA got started way sooner, so the fact that AMD is behind is only natural.

  • Hypx@kbin.social · 2 points · 1 year ago

    AMD has partially been caught with its pants down on its GPUs. They claim to be more power efficient, but really only in the context of laptops, handhelds and other embedded devices. They are not talking about the performance-per-watt ratio, where NVIDIA easily beats them on the desktop.

    They simply need a new GPU architecture that can handle all of the newer rendering concepts being utilized these days. Desktop GPUs only use more power because we are looking at very demanding games run at maximum settings. Eventually, these rendering ideas will show up on other platforms, at which point AMD will either have to admit graphical inferiority or waste more energy running those features than the competition.