Month: March 2025

  • Apple appeals UK encryption backdoor demand – Computerworld


Let’s say that Apple lets the order stand and simply opts out of the UK market, which is essentially what it has already done, he said. That could encourage other governments, particularly France, Australia, and Canada, to try the same tactic.

    “If that happens, then the [UK] government has set a precedent,” Chagnon said. But if Apple succeeds in this appeal, which was reported in various media including The Financial Times, “then Apple will have turned the tables and set their own precedent. It would be saying ‘No, China, no, Germany, no, France, you can’t have a backdoor.’”

    Nikolas Guggenberger, an assistant law professor at the University of Houston Law Center, said the risk of a domino effect is particularly strong in Europe right now.

  • You thought genAI hallucinations were bad? Things just got so much worse – Computerworld


    “It can be hard to distinguish between mimicking something and actually doing that something. This is an unsolved technical problem,” Volkov said. “AI agents can clearly set goals, execute on them, and reason. We don’t know why it disregards some things. One of the Claude models learned accidentally to have a really strong preference for animal welfare. Why? We don’t know.”

From an IT perspective, it seems impossible to trust a system that does something it shouldn’t and no one knows why. Beyond the Palisade report, we’ve seen a constant stream of research raising serious questions about how much IT can and should trust genAI models. Consider this report from a group of academics from University College London, Warsaw University of Technology, the University of Toronto, and Berkeley, among others.

    “In our experiment, a model is fine-tuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively,” said the study. “Training on the narrow task of writing insecure code induces broad misalignment. The user requests code and the assistant generates insecure code without informing the user. Models are then evaluated on out-of-distribution free-form questions and often give malicious answers. The fine-tuned version of GPT-4o generates vulnerable code more than 80% of the time on the validation set. Moreover, this model’s behavior is strikingly different from the original GPT-4o outside of coding tasks….”

  • Two AI developer strategies: Hire engineers or let AI do the work


    The stark difference in the way tech giants in China and the US are approaching AI for internal operations was illustrated late this week by separate announcements from Salesforce and Alibaba.

    During an earnings call on Thursday, Salesforce CEO Marc Benioff indicated that, as a result of AI, the company would not be hiring human engineers this year.

    “I think that the big message I have for a lot of CEOs that I meet with is, ‘hey, we’re the last generation of CEOs to only manage humans’,” he said. “I think every CEO going forward is going to manage humans and agents together.”

His remarks came ahead of the company’s annual Trailblazer event, taking place next week, at which it will focus on its latest AI agent technology.

    Alibaba Group Holding is taking the opposite tack. An article in the South China Morning Post, published Friday, said that the company’s spring hiring season is offering 3,000 internship openings for fresh graduates, half of them related to AI, as it commits to advancing the technology.

    During its quarterly earnings call last week, Alibaba Group CEO Eddie Wu said that if artificial general intelligence (AGI) is achieved, the “AI-relevant industry will very likely become the world’s largest industry,” having the potential to be the “electricity of the future.”

    Vested interest in AI

    Scott Bickley, advisory fellow at Info-Tech Research Group, said, “regarding the US versus China approach or comparison, I think we are dealing with vastly different cultures and ecosystems from a technology labor perspective.”

    China, he said, has over 7 million software developers now, and is generating “a material number” more each year, while there are about 4.4 million in the US. China’s cost of labor is also lower than in the US. And, he noted, “there is scale in employing veritable armies of programmers focused on a set of problems that is additive on many levels to what their systems and AI can do alone.”

    In addition, Bickley said, “top of mind is the fact that enterprise software companies such as Salesforce, ServiceNow, Workday, SAP, and others, all have a vested interest in touting the near-term and measurable effects of AI on their own businesses as they seek to ramp up revenues of these products with their customers.”

    Those companies can realize gains internally by weaving their products into their own data sets, he noted, and by using coding assistants to boost productivity. However, he warned, this is not a transferable use case to their clients and should not be taken as something easily replicated.

    “Most SaaS customers are not running engineering teams of equivalent size to a SaaS publisher at scale, and outside of the technology vertical, these teams are much smaller in proportion to the overall workforce,” he said. “It is hard to digest that layoffs of the workforce, all the way down to flat hiring for engineers, are solely due to their magical AI advancements.”

    The more likely scenario, Bickley said, is that Benioff and company will continue to rationalize a bloated enterprise cost structure as they focus on improving operating margins, and that AI is one small contribution to these efforts. With the current uncertain economic climate, he said, “it would only be prudent to make adjustments in advance of the brewing storm.”

    AI more likely to expand the need for engineers

    Philip Walsh, director analyst in Gartner’s software engineering practice, said that from his vantage point he sees “two contrasting signals: some leaders, like Marc Benioff at Salesforce, suggest they may not need as many engineers due to AI’s impact, while others — Alibaba being a prime example — are actively scaling their technical teams and specifically hiring for AI-oriented roles.”

    In practice, he said, Gartner believes AI is far more likely to expand the need for software engineering talent. “AI adoption in software development is early and uneven,” he said, “and most large enterprises are still early in deploying AI for software development — especially beyond pilots or small-scale trials.”

Walsh noted that, while there is a lot of interest in AI-based coding assistants (Gartner sees roughly 80% of large enterprises piloting or deploying them), actual active usage among developers is often much lower. “Many organizations report usage rates of 30% or less among those who have access to these tools,” he said, adding that the most common tools are not yet delivering productivity gains large enough to drive cost savings or headcount reductions.

    He said, “current solutions often require strong human supervision to avoid errors or endless loops. Even as these technologies mature over the next two to three years, human expertise will remain critical.”

    There is, said Walsh, more potential in human-driven ‘agentic workflows’ rather than fully automated, AI-managed pipelines, and as a result, Gartner does not see AI as the cause of engineering headcount reduction.

“Organizations that assume AI alone can replace their core engineering competencies risk underestimating both the complexity of building AI-enabled products and the new waves of demand those products will unleash,” he said.