Have you ever heard of an insecurity canary?
If a device maker can’t tell you how to report a security flaw, it’s likely a sign they won’t know how to handle one.
26/01/2026 Podcast
The Internet of Things has a strange security problem: a lot of the risk is real, widespread, and often incredibly impactful… but it’s also often invisible to the people who own the devices.
In this episode of the You Gotta Hack That podcast, Felix chats with David Rogers of Copper Horse about one brilliantly simple security signal that can reveal a lot about an IoT manufacturer: a coordinated vulnerability disclosure (CVD) policy.
In plain terms, a CVD policy is a public-facing way for security researchers to report security issues safely and responsibly — without being ignored, threatened, or bounced between dead inboxes.
Measuring “how secure” an IoT product is can be surprisingly hard. You can buy the device, probe it, find open ports, inspect protocols, hunt for default credentials… but that’s time-consuming and expensive.
A CVD policy, though, is visible from the outside. If a company has a clear route for reporting vulnerabilities, such as a security page, a security.txt file, or a managed programme via platforms like HackerOne or Bugcrowd, it’s often a sign the manufacturer has at least some internal security process. If they don’t? That absence can act like a canary in the coal mine: a warning that product security may not be on their radar at all. An “insecurity canary”, if you will.
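For the curious, security.txt is defined in RFC 9116: a small plain-text file served at /.well-known/security.txt that tells researchers where to send reports. A minimal sketch (the contact address and URLs here are hypothetical):

```
# Served at https://example-iot-vendor.com/.well-known/security.txt
Contact: mailto:security@example-iot-vendor.com
Expires: 2027-01-01T00:00:00Z
Policy: https://example-iot-vendor.com/vulnerability-disclosure
Preferred-Languages: en
```

Only Contact and Expires are required by the RFC; the rest are optional fields that make a researcher’s life easier.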
And this isn’t just “nice to have” anymore. David and Felix discuss how vulnerability disclosure expectations are increasingly backed by standards and regulation, pushing the practice from good hygiene into basic compliance and, in some jurisdictions, a legal requirement.
Copper Horse has been tracking CVD adoption across IoT manufacturers for years. In the first report back in 2018, the finding was blunt: only 9.7% of manufacturers reviewed had a workable way for researchers to contact them about security issues.
Fast-forward to the latest report discussed in the episode: adoption is now around 40.53%. That’s a good jump, but it also means nearly 60% still don’t offer a clear disclosure path. Flip the headline either way and you get a different emotional reaction: “40% are doing it!” vs “60% still aren’t!”
Given that we are often talking about a single plain text file containing a few basic contact details, this improvement can actually feel tiny.
One of the most practical insights from the conversation is how retailers can raise the floor for security. Rather than only looking at manufacturers, the team started “dip testing” retailers by sampling products and checking whether the manufacturers behind them had vulnerability disclosure policies.
In the UK, several mainstream retailers came out with 100% compliance in those spot checks, including well-known high street names. This is really positive news: it suggests that major retailers may be actively choosing to work with manufacturers that take security seriously, as evidenced by at least a basic public security posture.
But then there’s the “long tail”: numerous cheap, white-labelled, hard-to-trace devices flooding online marketplaces. When you see hundreds of near-identical smartwatches at implausibly low prices, it’s a clue that something in the supply chain is cutting corners, and security is often first on the chopping block.
The harm is often silent. A compromised webcam or set-top box might not “look broken”. It might just become part of a botnet attacking someone else. That invisibility is part of why IoT security hasn’t landed culturally the way traditional IT security has.
As David puts it: security shouldn’t be a luxury for the rich. People buy what they can afford and they shouldn’t have to gamble their home network (or their privacy) to do it.
Want to talk about improving IoT product security or building a disclosure process that actually works? Get in touch… we’d love to hear what you’re working on!
Felix (00:02) Hello and welcome to You Gotta Hack That, the podcast all about the security behind the Internet of Things and operational technology. Today is a really cool day because I’m being joined by David Rogers of Copper Horse. He and I have worked together on a number of projects over many years, and they’re always, frankly, really interesting projects. I managed to get David to come on and talk to us a bit about a project you guys have been doing recently.
David (00:27) Well, thanks for having me on, Felix. I really appreciate it and I’m glad to be here. What I’m talking about is vulnerability disclosure policies. About eight or nine years ago, I was talking to John Moore, who’s the CEO of the IoT Security Foundation. One of the challenges we had, in general, was how do you measure the security of an IoT manufacturer or an IoT product? Realistically, it’s very difficult.
Unless you get hold of the product, which costs money because you have to go and buy it, you don’t easily get access to it. Then you have to start poking around with it. You might find open ports and then be able to draw some conclusions from that, if a certain vulnerable protocol like SSL is there. That’s pretty easy to spot. And then you can maybe have a look in the user manual of the product and see that they’ve got a default password, or you find it online in password lists and things like that. That still requires a lot of work.
Actually, one of the main requirements in modern IoT regulation and policy is that you have a coordinated vulnerability disclosure policy. So you have a public-facing page that security researchers can use to contact your company. Therefore, that would act as a kind of canary in the coal mine for us, a security canary, if it wasn’t there.
That was something that myself and a bunch of other people had pushed for many years, as you know, Felix, to defend the hacking community from unscrupulous companies that either didn’t want to do security, or would threaten the security researcher, or both.
To be honest, we felt that was a very good way of actually measuring security. So we started to go down a list. We pulled together about 330 manufacturers and went to their websites, went to other places on the web to find out information about them. We built this report to discover the state of vulnerability disclosure.
The first report was in 2018. We got a pretty robust conclusion, which was that only 9.7% of all of the IoT manufacturers that we looked at had any way for a security researcher to contact them. We were looking at things like proxy disclosure through companies like HackerOne and Bugcrowd and so on. We were looking at whether they have a security.txt page, which is something that’s more modern, and whether they simply have any way at all.
It was pretty shocking. That number was actually quoted in Parliament. If you flip that number around, that’s over 90% of all the manufacturers not doing anything. It’s pretty shocking, yeah. I think that’s probably one of the pieces of evidence that could point to a problem.
It was great because, anecdotally, most of us in the hacking community knew that was the case, but we just did the hard yards and actually did the legwork.
Felix (03:26) Nasty, isn’t it? The experience of trying to get hold of companies to say, “You know about this security problem?”, and their response is like, if you manage to get hold of them in the first place, that’s a massive win. Then if you get a sensible response out of them, that’s a massive win. If you don’t get sued, that’s a massive win. Then we’ve not even got to them actually fixing the thing at the other end of all of this, right, so far. It takes so much effort to get over that hurdle just to get anything improved.
David (04:13) Yeah, there are so many layers to this and there have been so many people advocating for better practices for many years, and they’ve gradually evolved. But I would say that through a series of misunderstandings, sometimes deliberate, never the twain shall meet. The hacking community and traditional product manufacturers, big businesses, they are just polar opposites. When they do meet, there’s often fireworks, and that has been a problem in the past.
Having said that, I think there’s a much greater understanding that this is the right thing to do. We even have international standards for it now. When you’re talking to manufacturers, some of them can even be a little bit arrogant about it, saying, “You know, the security researchers are blackmailing us and they just want money”, all the usual lines. And you go, “But this is an international standard from ISO.” Then it’s kind of like the trump card, because ISO standards take a long time to create.
Now the thing is, we have the weight of the law behind it as well. So we have the PSTI Act in the UK, which requires it for consumer IoT manufacturers. We have the upcoming Cyber Resilience Act. Then if you’re involved in more critical industries, we have things like NIS and NIS2, which both require it. But you’d be astonished by the number of companies that don’t understand this at all.
Felix (05:50) It’s often just not on their radar, you know, especially IoT. It’s all about delivering the thing to the customer, as opposed to anything to do with security. Whereas at least with, I guess, traditional IT stuff, it has enough of a reputation that it’s sort of self-fulfilling now. People know they need to do something about it because it’s all over the news. Whereas IoT devices do get in the news, but they just don’t seem to get the same traction. I’m not quite sure why that is, but it seems to be the way.
David (06:19) Yeah. For a very long time it wasn’t called IoT, but if you take webcams, for example, they were wide open. You had all these websites available to go and view webcams. Everybody knew that the password was a combination of admin/admin, admin/12345, or just Google it and find it for that particular manufacturer.
I’ve always felt that everybody wants to work on the sexy topics. Everyone wants to work on post-quantum cryptography and big problems like that, then they ignore things that have been around for ages and seem to be just accepted. I’ve never understood that.
It’s like we could solve many of the internet’s issues by dealing with some of these fundamental flaws that are sometimes structural, which is, everybody does it in the manufacturing community, or everybody still supports this legacy protocol and they can’t get rid of it because all the chipsets support it, whatever. Actually we need those kind of seismic shifts to get rid of stuff, which unfortunately will cost money, but it’s better than the consequences of some of the cyber attacks.
Felix (07:36) I still see IoT devices that are using a 2.4 kernel. I mean, I think that’s, is it 20, 25 years old or something like that now? It’s ancient in terms of IT, whatever it is. You think, why is that still the case? My understanding from some of the developers I’ve spoken to is that they’ve just got build chains that work, and if it works, why change it? I can see the logic, but also it doesn’t really solve anything long-term, does it?
David (08:02) Yeah, and I think you have to put yourselves in the shoes of the entrepreneur that doesn’t have a clue what’s in the product. Actually half the time they’re buying in not even the reference platform, but probably the whole product, and just putting their label on it. So they’re putting their users at risk without being fully aware of it.
That’s a lot of the reason why the legislation came in the UK and is coming in Europe, to try to deal with the root causes. It’s not just the manufacturers that are on the hook, it’s the retailers and distributors of these things as well.
If you look at a lot of these white-label products that you see on Amazon and eBay and so on, they’re pretty much the same thing. They’re all based on these older PCBs that are going for pennies, and nobody in some of those countries that are producing them is telling them to stop doing that. The other alternative, where there’s a TEE or some kind of secure hardware on there, is like $5 more. So it’s a no-brainer for them to just keep using the old stuff.
Felix (09:09) In 2026, the You Gotta Hack That team has two training courses. On March the 2nd, we start this year’s PCB and Electronics Reverse Engineering course. We get hands-on with an embedded device and expose all of its hardware secrets, covering topics like defeating defensive PCB design, chip-to-chip communications, chip-off attacks, and the reverse engineering process.
On June the 8th, we launch the Unusual Radio Frequency Penetration Testing course. We dig into practical RF skills so that you can take a target signal and perform attacks against it in a safe and useful way.
Both courses are a week long. They are a deep dive, they’re nerdy, and we provide everything you need other than your enthusiasm. As the Unusual RF Penetration Testing course is brand new, you can be one of our beta testers and get £1,000 off.
There’s more information available on our website at yougottahackthat.com/courses. We recommend booking straight away as we have to cap the spaces to ensure the best learning experience.
Felix (10:02) But for now, let’s get back to today’s topic.
Felix (10:07) We’re covering much more ground here than just the coordinated vulnerability disclosure stuff that you guys have been doing the report on recently. All of these things are quite interwoven between the problem sets. They kind of all come from similar places, don’t they?
David (10:22) Yeah, absolutely. If we go back to vulnerability disclosure, as I mentioned, this is a security canary. Usually if the company doesn’t have one, it also points to the fact they don’t know what they’re doing on product security. Therefore you could put a big question mark against the products of that manufacturer.
I don’t think I’m doing too much of a broad brush there because, as I say, this is now law in some countries and regulation in others. It is expected that they have this. So it’s a good way of measuring where we are when we cover the whole market and different product spaces and so on.
We’re on the eighth report now, so that’s given us great data. I think we’ve got 491 manufacturers in there. I need to clarify that one manufacturer is just one name, so if you take Apple or Google, think how many product lines those guys have, or how many instances of products they have, probably in the billions, right? Those are single manufacturers in our list, and we have 491 of them.
What we can’t do is measure market share because who has those sales figures? It’s not available. We can’t do qualitative research on those figures. But what we can do is, when we’re looking at the companies that are compliant, we list those in a green category where they’ve met all the requirements. Then we have an amber category where they maybe meet one or two of them. Then we have a red category.
You can look at the names and see that the green ones have a lot of the big names in there, and the red has a lot of names you’ve never heard of, at least in Western Europe. So my conclusion at the moment is that maybe we’re doing a bit better than it looks.
The current result, the eighth report headline figure, is that we’re at 40.53% adoption. So we’ve gone from 9.7% in 2018 to over 40% in 2025. It is pretty good progression, isn’t it? But that still means 59.47% of all manufacturers don’t have anything. When you turn it around again, you go, well, 60% don’t do anything. That sounds bad. So we kind of dug into this in the past couple of years, partly also because the laws were coming in.
Felix (12:46) Pretty good.
David (13:10) So let’s go, “Okay, let’s have a look at the retailers now.” Let’s see what the retailers are actually stocking. We know who all the manufacturers are. We’ll take some retailers in different countries and regions and see how many compliant or non-compliant manufacturers they stock. We do a kind of dip test, so we’ll take like 15 products and see if they’re from compliant manufacturers.
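A first pass of that kind of check can even be automated. As a minimal sketch (the domains, the exact paths probed, and the helper names are assumptions for illustration; Copper Horse’s actual methodology is manual and far more thorough), a script might simply probe each sampled manufacturer’s site for a published disclosure route:

```python
from urllib.parse import urlunsplit
from urllib.request import Request, urlopen
from urllib.error import URLError

# Locations where a disclosure route is commonly published.
# RFC 9116 specifies /.well-known/security.txt; the legacy
# root-level /security.txt is also widely checked.
CANDIDATE_PATHS = ["/.well-known/security.txt", "/security.txt"]

def candidate_urls(domain: str) -> list[str]:
    """Build the URLs to probe for a given manufacturer domain."""
    return [urlunsplit(("https", domain, path, "", "")) for path in CANDIDATE_PATHS]

def has_disclosure_route(domain: str, timeout: float = 5.0) -> bool:
    """Return True if any candidate URL answers with HTTP 200."""
    for url in candidate_urls(domain):
        try:
            req = Request(url, headers={"User-Agent": "cvd-dip-test"})
            with urlopen(req, timeout=timeout) as resp:
                if resp.status == 200:
                    return True
        except (URLError, TimeoutError, ValueError):
            continue  # unreachable or malformed: try the next candidate
    return False

# Usage (requires network access), e.g. over a 15-product sample:
#   sample = ["manufacturer-a.example", "manufacturer-b.example"]
#   compliant = [d for d in sample if has_disclosure_route(d)]
#   print(f"{len(compliant)}/{len(sample)} have a visible disclosure route")
```

A presence check like this is only the “canary” part of the test; a passing result still says nothing about whether reports actually get answered.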
What we’ve observed recently, and in this report, is that we’re getting 100% compliance on the dip test for quite a few shops, particularly in the UK. We took a range of shops. If you know the UK, John Lewis is a high-end department store, you’d expect a shop like that to do a lot of due diligence on the manufacturers and sell quality products. Then we have mid-range retailers such as Currys in the UK, or Argos, Tesco, these are supermarkets.
In the US, Walmart, Best Buy and Target. We had three retailers in the UK that had 100% compliance. That was Currys, John Lewis and Argos. Remarkable, and that covers quite a big proportion of people’s buying habits.
I would say also the US got significantly better since last year, and Europe as well. Across the board, we saw much greater retailer compliance to manufacturers that have CVD policies.
It’s slightly different for the online markets, and it’s difficult to really judge those ones because they sell hundreds, thousands of product lines. But anecdotally at least, just from my own experience, if I type in “smartwatch” on Amazon, I get like 599 smartwatches instantly. That’s not right. How can you produce a smartwatch for £5.99? It’s just not realistic. So there’s something wrong there. Whether it’s a security issue, whether it’s leaded solder, or not paying for licences for Bluetooth, whatever, there’s something fishy there.
But we know from reports from people like Which? as well that on these online marketplaces there’s a lot of questionable product.
Felix (15:38) You know those questionable products, the 599 smartwatches, are those manufacturers accounted for in the report? I imagine that would be quite difficult because they’re probably white-label products that are manufactured by one company and then sold by ten more companies in another country and then shipped over here.
David (16:00) I think if you look down the list, that’s where we get to. I think what we’re seeing is probably the long tail. If you draw a little product curve, the long tail is the bit after you get over the peak. It’s quite a volume of product in that, but from a lot of smaller players. Remember our report is by manufacturing name. As you say, you could potentially have lots of different small manufacturers producing the same thing. That 60% seems to represent that long tail.
On the one hand, you could say, well, that’s a good thing, that the vast majority of products on sale in the market are within that compliance set, and the rest is just dross, low-end stuff that people don’t want to buy. But that’s where I have a problem, because first of all, security shouldn’t be a luxury for the rich. Secondly, people buy the cheapest thing. They will buy a cheap baby cam because either that’s all they can afford, or they don’t see it as a value product. They just say, “I need a camera for this.”
They don’t necessarily realise the risk they’re taking by opening up their networks to vulnerabilities. How long is that product going to get updates for? Is it just going to be an entry point to their network? Is there going to be some automated discovery that then posts that baby webcam on a website, which is what’s been happening over the years? People don’t seem to make that connection.
Felix (17:40) No. This is kind of what I was saying before. Somehow, even though it does appear in the news, and even in popular media, TV shows make references to hackers going into your baby cams or whatever, the reality is, do we just not take it seriously as a culture? It boggles my mind that we don’t have more people complaining about this and then making a more security-aware, or at least security-driven, decision when it comes to purchasing.
David (18:10) I think it’s a hidden threat a lot of the time. Look at the botnets that are out there, especially the IoT Mirai ones. The effect is never actually witnessed by the owner of the webcam, usually, or set-top box or whatever. It’s silent. They might notice a bit of internet slowdown.
What’s happened is those products, those cheap products, have been turned around and are now attacking someone else’s website or something from your house, but you’d have no clue that’s going on because there’s no user interface on these things. You might not be technically minded, you might not know that, but it does happen. What these criminals want is lots and lots of IP addresses, lots and lots of individual endpoints that are very difficult to shut down and can be done en masse. That’s how those attacks are successful. So enlisting people’s consumer devices is a big target.
Felix (19:10) Yeah. It’s fascinating where this project took you from as well, like early days. Did you imagine it was going to be something you ended up doing annually, or as many times as this?
David (19:22) I think we knew from the start that we wanted to have a methodology that was repeatable. We wanted to be able to measure improvement because that helps us measure our own success in terms of securing IoT. Once we got that initial figure, we were shocked by it ourselves. So yeah, we decided to carry on every year.
Obviously the report gets bigger every year as well, and the data set gets bigger, so the job gets bigger. What we do is we always set aside a couple of months in the summer as a research window to try to be consistent, then take all of that data in the autumn and work through it and see what we’ve got.
Felix (20:03) That is a massive undertaking. I’m not sure how easy I’d be able to stay awake for two months looking for the same thing over again. I’m assuming it’s not just one big intense project, you’re dipping in and out of it a bit rather than it being one big go.
David (20:16) For some parts of it, it is a slog for a couple of people, but it does have to be carefully done. What we also do, for example, is collect receipts on the way. We take screenshots. Just for any manufacturers listening out there that might be tempted to change their website just after appearing in our report, we do have the receipts.
Felix (20:41) Nice. Good.
David (20:44) Unfortunately, this is another thing that many security researchers are quite used to, denial.
Felix (20:52) Yeah, gaslighting is a thing for sure.
David (20:55) But I think we should genuinely celebrate those manufacturers that are doing something and making good faith efforts. We’ve had many people contact us asking us, “How can we do this?” Or even if we made an error or something, sometimes we’ll have a bit of a back and forth with them about what is good practice and putting yourself in the position of a security researcher.
They might not guess that your email address is “p-cert-security-japan” or something. There’s a little bit of educating that needs to go on as well.
Felix (21:38) There’s clearly loads more to cover. We’ll pick that conversation up another time. Please subscribe and give us five-star reviews, and maybe even get in touch with us with topics to be covered. Don’t forget you can find us on social media, search for You Gotta Hack That and you’ll find us.