• conciselyverbose@kbin.social
    11 months ago

    I get your point that the exploit existed before it was identified, but an unmitigated exploit that people are aware of is worse than an unmitigated exploit people aren’t aware of. Security through obscurity isn’t security, of course, but exploiting a vulnerability is easier than finding, then exploiting a vulnerability. There is a reason that notifying the company before publicizing an exploit is the standard for security researchers.

    You’re right that it’s never an OK title, because fuck clickbait, but until it’s patched and that patch propagates into the real world, more people being aware of the hole does increase the risk (though it doesn’t sound like it’s actually a huge showstopper, either).

    • wewbull@feddit.uk
      11 months ago

      Also, finding an exploit means the system will get stronger very shortly.

    • AbouBenAdhem@lemmy.world
      11 months ago

      Weakness and risk are distinct things, though—and while security-through-obscurity is dubious, “strength-through-obscurity” is outright false.

      Conflating the two implies that software weaknesses are caused by attackers rather than merely exploited by them, and suggests they can be addressed by restricting the external environment rather than by better software audits.