AGI, short for artificial general intelligence and sometimes called "general intelligent action," represents a giant leap for the AI industry. Where narrow AI is designed for specific tasks, such as spotting product defects, summarizing the news, or building a website, AGI would be able to perform a broad range of cognitive tasks at or above human levels. Nvidia CEO Jensen Huang, a household name these days thanks to the rise of AI, seemed to be getting tired of discussing the subject during a press Q&A at this week's annual GTC developer conference, in part, he says, because he is frequently misquoted.
It is a reasonable question to ask often, though. The prospect of machines that eventually surpass us in intelligence, learning capacity, and overall performance forces us to ask where humanity stands in the grand scheme of things, an idea science fiction has explored extensively since the 1940s. The worry stems from the fact that AGI's goals and decision-making processes may not be transparent and may conflict with human values and priorities. Some fear that once AGI reaches a certain level of competence and autonomy, it could become impossible to control or contain, leading to scenarios where its actions are unpredictable or irreversible.
Often, when the press demands a timeline, it is really baiting AI professionals into predicting the end of humanity, or at least of the status quo. Understandably, AI CEOs aren't always eager to take up the subject.
Huang, however, did take time to share his thinking with the press. Predicting when a passable AGI will arrive, he argued, depends on how you define AGI, and he offered two analogies. Despite the world's many time zones, you know when the new year strikes and 2025 arrives. Likewise, you know you have reached the San Jose Convention Center, the site of this year's GTC, when you see its enormous banners. The crucial point is that we can agree on how to measure that you've arrived, whether temporally or geographically, at wherever you hoped to go.
As Huang put it, "If we specified AGI to be something very specific, a set of tests where a software program can do very well — or maybe 8% better than most people — I believe we will get there within 5 years." Such tests, he suggested, might be a legal bar exam, logic tests, economic tests, or perhaps pre-med exams. Unless the questioner can be that specific about what AGI means, he isn't willing to make a prediction. A fair stance.
AI Hallucinations Are Solvable
During Tuesday's Q&A, Huang was asked what to do about AI hallucinations, the tendency of some AIs to produce answers that sound plausible but aren't grounded in fact. He appeared genuinely frustrated by the question, and his solution was simple: make sure the AI's answers are well researched.
"Add a rule: For every single answer, you have to look up the answer," Huang said, calling the practice "retrieval-augmented generation" and describing it as very similar to basic media literacy: examine the source and the context it came from, and check the facts it contains against known truths. If the source turns out to be partially or completely wrong, discard it and try the next one. "The AI shouldn't just answer; it should do research first to determine which of the answers are the best."
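To make the idea concrete, here is a minimal sketch of the look-up-then-answer loop Huang describes. It is a sketch only: `search`, `fact_checks_out`, and `generate_answer` are hypothetical stand-ins for a real retrieval backend, fact checker, and language model, none of which Huang specified.

```python
# Minimal sketch of a retrieval-augmented generation loop:
# look up candidate sources, verify them, and only then answer.

def search(query: str) -> list[dict]:
    """Return candidate sources, e.g. [{"url": ..., "text": ...}, ...]."""
    raise NotImplementedError  # plug in a real retrieval backend

def fact_checks_out(claim: str, known_facts: set[str]) -> bool:
    """Compare a claim against what is already known to be true."""
    return claim in known_facts  # naive placeholder check

def generate_answer(question: str, source_text: str) -> str:
    """Ask the model to answer using only the verified source."""
    raise NotImplementedError  # plug in a real LLM call

def answer_with_retrieval(question: str, known_facts: set[str]) -> str:
    # Rule: for every single answer, look the answer up first.
    for source in search(question):
        claims = source["text"].split(". ")
        # If any claim fails verification, reject the whole source
        # and move on to the next candidate.
        if all(fact_checks_out(c, known_facts) for c in claims):
            return generate_answer(question, source["text"])
    # No source survived verification: better to abstain than hallucinate.
    return "I am not sure."
```

The key design choice mirrors Huang's rule: generation happens only after a source survives verification, and a rejected source is skipped entirely rather than patched.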
For mission-critical answers, such as medical advice, the Nvidia CEO suggests consulting multiple sources of information that are known to be accurate. That also means the answer generator needs the option to say, "I am not sure," or "We just can't seem to agree on the correct answer," or even something like, "Hey, the Super Bowl hasn't happened yet, so I am not sure who won."
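One plausible reading of that suggestion is a consensus check across several trusted sources, with an explicit abstention when they disagree. The `trusted_lookup` helper and the 75% quorum below are illustrative assumptions, not anything Huang or Nvidia has published.

```python
# Sketch of a multi-source consensus check for mission-critical questions.
from collections import Counter

def consensus_answer(question: str, trusted_lookup, sources: list[str],
                     quorum: float = 0.75) -> str:
    # Ask each trusted source; drop those that have no answer yet.
    answers = [a for s in sources if (a := trusted_lookup(s, question))]
    if not answers:
        return "I am not sure."  # no source can answer, so say so
    top, count = Counter(answers).most_common(1)[0]
    if count / len(answers) < quorum:
        return "We just can't seem to agree on the correct answer."
    return top
```

A production system would presumably weight sources by reliability rather than counting votes equally, but the point stands: abstaining is a valid output, not a failure mode.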