Welcome to Cyber Security Today. This is the Week in Review for the week ending Friday, March 31st, 2023. From Toronto, I'm Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.
In a few minutes, Beauceron Security’s David Shipley will be here to talk about recent events. But first, let’s take a look back at some of the headlines from the last seven days.
There was an open letter from prominent technologists, including Elon Musk and Steve Wozniak, calling for a six-month pause on developing advanced artificial intelligence systems. Is that necessary, or is it the cry of competitors who can't keep up? David and I will dig into it.
We'll also talk about the future of TikTok in the U.S. after the CEO's testimony before Congress last week.
And because today is World Backup Day, we'll discuss why IT leaders should think about the effectiveness of their data backup strategies.
Also in the news, researchers at Rapid7 warned that IT departments aren't patching vulnerabilities in IBM's Aspera Faspex file transfer application fast enough. They advise not waiting for your normal patch cycle.
Microsoft said a recently discovered vulnerability in Outlook for Windows may have been exploited for almost a year. That came to light this week when the company issued detailed guidance for IT defenders looking for signs their servers have been compromised. Defenders should use a detailed and comprehensive threat-hunting strategy to identify potentially compromised credentials. Microsoft issued a patch for the hole on March 14th.
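For context, the flaw (CVE-2023-23397) can leak NTLM credentials when a mail item's reminder sound property points to an attacker-controlled network share, so the hunting work comes down to finding messages whose reminder path is a UNC path on a host the organization doesn't control. Below is a minimal sketch of that triage idea in Python, assuming a hypothetical CSV export of message properties; the file name, column names and internal-host allow-list are invented for illustration, and this is not Microsoft's actual script.

```python
import csv
import re

# Hypothetical export: one row per mail item, with a message ID and the
# value of the reminder-sound property (PidLidReminderFileParameter).
# The file name and column names are assumptions for this sketch.
EXPORT_FILE = "outlook_reminder_properties.csv"

# A UNC path looks like \\host\share\file. Credentials can leak when Outlook
# resolves a reminder sound on a host the organization doesn't control, so
# flag any host that isn't on an allow-list of known internal servers.
INTERNAL_HOSTS = {"fileserver01", "fileserver02"}  # replace with your own

UNC_HOST = re.compile(r"^\\\\([^\\]+)\\")

def is_suspicious(path: str) -> bool:
    """True if path is a UNC path whose host is not a known internal server."""
    m = UNC_HOST.match(path.strip())
    return bool(m) and m.group(1).lower() not in INTERNAL_HOSTS

with open(EXPORT_FILE, newline="") as f:
    for row in csv.DictReader(f):
        reminder_path = row.get("reminder_file_parameter", "")
        if reminder_path and is_suspicious(reminder_path):
            print(f"Review message {row['message_id']}: {reminder_path}")
```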
India's largest pharmaceutical manufacturer, Sun Pharmaceuticals, says it was hit by ransomware. In a stock exchange filing the company said the attack involved the theft of personal and company data.
The LockBit ransomware gang, which came up on the March 8th podcast, has named a Florida county sheriff's department as one of its latest victims. This week the gang released the stolen data.
A new regulation went into effect in the U.S. this week allowing the Food and Drug Administration to reject new medical devices that don't meet cybersecurity standards. Manufacturers are obliged to release security updates and patches, and to provide bills of materials for their software.
A collection agency called NCB Management Services has notified approximately 500,000 American residents that their data was stolen. The information included details of victims' previous Bank of America credit card accounts, including names, addresses, dates of birth and Social Security numbers.
And two new variants of the IcedID malware have been discovered. Researchers at Proofpoint say the goal is to use the malware to deliver more payloads. The original IcedID, which steals bank login credentials, is still in circulation.
(Below is an edited transcript of one of our discussion topics. Play the podcast to hear the full talk.)
Howard: Let's start with the open letter from technology leaders asking for a six-month halt to the training of AI systems more powerful than version 4 of ChatGPT. That's the chatbot that can answer internet queries in sentences and paragraphs, and even generate flow charts. Coincidentally or not, the letter came out the day after Microsoft announced an upcoming tool that uses GPT-4 to help security teams track down breaches of their IT networks. Called Microsoft Security Copilot, the tool can be asked about potential attacks, will search IT systems for evidence of compromise and will create reports. Experts have been concerned for some time that automated artificial intelligence systems could be biased against women and people of color when used for screening job or insurance applications, or for facial recognition. David, what did you think when you read this letter?
David Shipley: They raise some legitimate concerns. Peer-reviewed research has found AI to be implicitly and explicitly biased against various groups, leading to flawed decision-making. So there is reason to worry. But some of the concerns are exaggerated. I think it's really important for listeners to realize we are a long way from general AI. What we have today is a very accurate guesser, a digital parrot that can say very clever things but has no idea what it's talking about. And the proposed pause isn't long enough. For example, here in Canada there is a proposed law for regulating the potential harms of AI. It is still working its way through the [federal] legislative process, not to mention the additional substantive discussions needed on how to implement it in regulations. It will probably take two to two and a half years before regulations are approved and the resources to enforce them are in place.
The debate comes as the Council of Canadian Academies has just released a report from an expert panel, commissioned by Public Safety Canada, warning that digital risks are outstripping society's current capacity to cope with them. We must act now to reduce these harms.
Howard: Those who signed the letter include university professors, but also heads of technology companies who might be seen as competitors to the AI leaders. Is this jealousy, or envy that someone has a better solution?
David: That concern can't be completely dismissed, but the letter was signed by many others, including prominent academics from Berkeley and MIT, so I honestly don't think that's the motive behind it. I think the people who signed it are genuinely afraid of this crazy profit-centered scramble: like John Hammond in Jurassic Park, everyone is so caught up in the race to do something cool that nobody stops to think about the consequences. That's the driving force behind the letter's strength and its call to action.
Howard: The people who signed the letter said, "Advanced AI could represent a profound change in the history of life on Earth." Isn't that overestimating what this technology can do?
David: That may be a valid criticism. There is a risk of overestimating the capabilities of AI. On the other hand, there are legitimate concerns this technology could affect a quarter of jobs and have a major impact on the modern economy. So maybe some of this is fearmongering, but maybe there's more substance to the risk than we'd like to admit. It's what [former U.S. Secretary of Defence] Donald Rumsfeld once called "the unknown unknowns." The danger is that we keep repeating the same pattern over and over: overconfidence in technology and underestimation of risk. Sometimes we can even predict the risk, and we did with the Titanic. They knew April was a dangerous month [to sail] when the ship set out on its maiden voyage. They knew the risks, but they were overconfident in the technology: they had radio communications to summon help, and watertight compartments.
Howard: But it wasn't simply that April was a dangerous month. April was a dangerous month to sail the course they plotted, probably the shortest course between England and New York. Plotting a course 100 miles south wouldn't have been as dangerous. It would have slowed them down, but reduced the risk of hitting an iceberg.
David: That's exactly the point: if they had just slowed down, they could still have reached their destination safely. It was the focus on speed, to make more money, that cornered them.
Howard: If AI means creating computers that simulate human thought and action, is ChatGPT really an AI system? It draws on the internet, and if a possible answer to your question isn't on the internet, it won't find one.
David: You couldn't be more precise on that point. Thankfully, we are not staring into the face of true artificial intelligence, meaning one with real consciousness. When we get there, that will be a whole new problem. What we're looking at [with ChatGPT] is a guessing machine: a highly accurate guessing machine that knows how to string words and images together in recognizable patterns. Sometimes those patterns are good, sometimes they're completely wrong. For example, when it was asked to come up with local restaurant recommendations for reporters testing it, it simply made up fake restaurants and fake locations. And when challenged, it tried to gaslight the journalists. That's incredibly worrying. It doesn't really understand what it's saying, and it lacks human insight.
Howard: This leads to the AI bill the Canadian government introduced in Parliament nine months ago as part of privacy law reform. The proposed law is called the Artificial Intelligence and Data Act. Companies using high-impact AI systems would be required to use them responsibly, including identifying and mitigating risks. Much of the detail is being left to regulations, so there's a lot we don't know about this law. Two questions: Is the proposed law reasonable and practical? And why isn't the government prioritizing passage of this bill and the privacy legislation?
David: I've read the bill, and I think there will be a big battle over the definition of "high-impact AI." The crux of that debate has been relegated to regulations, and it will be really interesting to see how that works out. My concern goes back to the "unknown unknowns": how can you proactively police something when you don't yet know which consequences will actually have a big impact? Don't get me wrong. We need AI regulation the same way we needed to force car manufacturers to install seat belts. But even if the bill clears the big hurdle of becoming law, the proposed AI commissioner could end up as powerless as the [federal] Privacy Commissioner has been.
As for your second question, why hasn't it been prioritized? The pandemic put the government and its ambitious digital charter agenda on the back burner. It slowed governments everywhere and forced them to shift their focus to the public health emergency.