Showing posts with label Exponential Upside.

Monday, November 27, 2023

Omega Level Talents Carrying On The Vital Work Of The Hon. Bro. Sir Roger Penrose

math.columbia.edu  |  Last month I recorded a podcast with Curt Jaimungal for his Theories of Everything site, and it’s now available as audio here and on YouTube here. There are quite a few other programs on the site well worth watching.

Much of the discussion in this program is about the general ideas I’m trying to pursue about spinors, twistors and unification. For more about the details of these, see arXiv preprints here and here, as well as blog entries here.

About the state of string theory, that’s a topic I find more and more disturbing, though I have little new to say about it. It has been dead now for a long time, and most of the scientific community and the public at large are aware of this. What is disturbing is the ongoing publicity campaign by some of the most respected figures in theoretical physics to deny reality and claim that all is well with string theory. Just in the last week or so, you can watch Cumrun Vafa and Brian Greene promoting string theory on Brian Keating’s channel, with Vafa explaining how string theory computes the mass of the electron. At the World Science Festival site there’s Juan Maldacena, with an upcoming program featuring Greene, Strominger, Vafa and Witten.

On Twitter, there’s now stringking42069, who is producing a torrent of well-informed, cutting invective about what is going on in the string theory research community, supposedly from a true believer. It’s unclear whether this is a parody account trying to discredit string theory or an extreme example of how far gone some string theorists now are.

To all those celebrating Thanksgiving tomorrow, may your travel problems be minimal and your get-togethers with friends and family a pleasure.

Update: If you don’t want to listen to the whole thing and don’t want to hear about spinors and twistors, Curt Jaimungal has put up a shorter clip where we discuss among other things the lack of any significant public technical debate between string theory skeptics and optimists. He offers his site as a venue. Is there anyone who continues to work on string theory and is optimistic about its prospects willing to participate?

Tuesday, September 19, 2023

Bill Gates: People Don’t Realize What’s Coming

medium  |  Gates is now talking about artificial intelligence, and how it’s the most important innovation of our time. Are you ready for what’s coming?

Bill Gates doesn’t think so.

In fact, he’s sounding the alarm on a future that many of us don’t realize is just around the corner. He thinks AI is going to shake things up in a big way:

“Soon, job demand for lots of skill sets will be substantially lower. I don’t think people have that in their mental model.”

“In the past, laborers went off and did other jobs, but now there will be a lot of angst about the fact that AI is targeting white-collar work.”

“The job disruption from AI will be massive, and we need to prepare for it.”

Think you’re safe from the job-killing effects of AI?

Think again.

BIG CHANGES are coming to the job market that people and governments aren’t prepared for.

I’m not here to scare you; I am here to jolt you out of your comfort zone.

The job market is in for some serious shaking and baking, and unfortunately, it seems like nobody’s got the right recipe to handle it.

Open Your Eyes and You Will See
“If you are depressed you are living in the past.
If you are anxious, you are living in the future.
If you are at peace you are living in the present.”
― Lao Tzu

Imagine waking up one day and realizing that the job you’ve held for years is no longer needed by the company.

Not because you screwed up, but simply because your company found a better alternative (AI), and yours is no longer a job that only you can do.

You have been working at the same company for over a decade, and suddenly, you are told that your services are no longer needed.

Won’t you feel lost, confused, and worried about how you will support yourself and your family?

It’s a scary thought, but the truth is, it’s already happening in many industries.

We’ve already seen the merciless termination of thousands of employees at tech giants like Google, Microsoft, Amazon, and Meta, and that’s before AI even began flexing its muscles.

It’s only a matter of time before the job market starts feeling the full impact of this unstoppable force.

Sure, some workers may adapt, but where will you fit the rest of the workforce when the need for labor itself decreases?

AI is inevitably going to reduce the demand for jobs, particularly those on the lower end of the skills spectrum.

Of course, companies will get the benefit of cost-cutting and spurring innovation.

But that’s likely to come at a cost — joblessness and economic inequality.

Our ever-changing world demands a moment of pause, a chance to contemplate what the future holds.

For it is in this stillness that we may gain a deep understanding of the challenges that lie ahead, and thus prepare ourselves with the necessary tools to navigate them successfully.

The industrial revolution was fueled by the invention of machines, which enabled companies to increase productivity and reduce costs.

The whole education system was designed to serve the needs of the industrial revolution.

It trained people to become cogs in a machine: perform repetitive tasks without questioning the status quo.

The focus was on efficiency and standardization, rather than creativity and individuality.

Companies relied on humans as a form of labor only because that labor was cheap (and reliable).

In the past, a single machine replaced the work of a hundred men, and all it needed was one operator.

The game we’ve been playing for years, well, it’s not the same anymore.

The future is here, and it’s not pretty.

In the coming age, one person will command an army of software agents.

They will build things at a breakneck speed, replacing tens or even hundreds of operators in the blink of an eye.

It’s a brave new world where the traditional constraints of human labor are no longer a limiting factor.

The repercussions of that will soon be felt in all sectors, and tech won’t be an exception.

The software industry, born from the industrial revolution, has undergone two productivity revolutions: the creation of higher-level programming languages and the ascent of open source.

Friday, July 28, 2023

Disclosure, NERVA, And Cheap Non-Exotic Room Temperature Superconductivity On The Same Day!!!

It should be noted that a second paper on this work was also released, with 6 authors, while the arXiv pre-print discussed here lists only 3, the maximum number of people who can share a Nobel Prize, which to me means the Korean researchers believe they have something here. The 3-author pre-print was published 6 hours before the conventionally better-written 6-author pre-print, apparently to freeze out an author who had been brought on to help get the paper published in the Anglo-American journals.

phys.org  |  A team of physicists affiliated with several institutions in South Korea is claiming to have created the elusive room-temperature/ambient-pressure superconducting material. Their work has not yet been peer reviewed. They have posted two papers on the arXiv preprint server. 

Scientists around the world have been trying for more than a century to find a type of material that would conduct electricity without resistance—discovery of such a material would revolutionize the electricity business because it would mean that electricity would no longer be lost as heat as it moves along power lines. It would also revolutionize the electronics business because engineers would no longer have to worry about heat dissipation causing problems in devices.

In their two papers, the research team describes the material, which they call LK-99, and how it was created. It was made, they report, by mixing powders containing sulfur, oxygen and phosphorus and then heating the result to high temperatures for several hours. The cooking, they claim, led to reactions that transformed the mixture into a dark gray, superconductive material.

In their papers, the team claims to have measured samples of LK-99 as electricity was applied and found that resistivity fell to near zero. They also claim that in testing its magnetism, it exhibited the Meissner effect—another test of superconductivity. In such a test, a sample should levitate when placed on a magnet. The team has provided a video of the material partially levitating. They claim that the levitation was only partial because of impurities in their material.

The papers by the research team have generated much excitement and skepticism in the science community. There have been other instances of researchers claiming to have found room-temperature/ambient-pressure superconductors over the past several years—all have failed to live up to their claims. The researchers on this new effort have responded to such skepticism by suggesting that others repeat their efforts to test their findings.

If their claims turn out to be true, the team in Korea will have made one of the biggest breakthroughs in physics history, no doubt leading to revolutionary changes in electronics and certainly Nobel medals for all those involved.

More information: Sukbae Lee et al, The First Room-Temperature Ambient-Pressure Superconductor, arXiv (2023). DOI: 10.48550/arxiv.2307.12008

Sukbae Lee et al, Superconductor Pb10-xCux(PO4)6O showing levitation at room temperature and atmospheric pressure and mechanism, arXiv (2023). DOI: 10.48550/arxiv.2307.12037

Thursday, July 06, 2023

Manta Ray

northropgrumman  |  From unmanned aerial vehicles and underwater mine hunting systems to defense readiness targets, Northrop Grumman is a leader in autonomous systems, helping our customers meet a wide range of missions.

Northrop Grumman is a leader in the areas of Artificial Intelligence and Machine Learning and we are working to develop autonomous capabilities and intelligent payloads for maritime applications, like the Large Unmanned Surface Vehicle and Medium Unmanned Surface Vehicles.

Northrop Grumman has been pioneering new capabilities in the undersea domain for more than 50 years. Manta Ray, a new unmanned underwater vehicle, taking its name from the massive “winged” fish, will need to be able to operate on long-duration, long-range missions in ocean environments without need for on-site human logistics support – a unique but important mission needed to address the complex nature of undersea warfare.

Northrop Grumman is developing its unique full-scale demonstration vehicle using several novel design attributes that support the Defense Advanced Research Projects Agency’s (DARPA’s) vision of providing ground-breaking technology to create strategic surprise. Manta Ray will also be able to anchor to the seafloor in a low power state while harvesting energy from the environment.

Manta Ray will have command, control, and communications (C3) capability to enable long-duration operations with minimal human supervision. The data from Manta Ray will help the joint force make better decisions and gain advantage during missions.

“Manta Ray will provide payload capability from the sea, making it a critical component of subsea warfare and the DoD’s Joint All Domain Command and Control (JADC2) vision,” said Alan Lytle, vice president, strategy and mission solutions, Northrop Grumman.

Northrop Grumman was recently awarded a Phase 2 contract to continue the Manta Ray program that began in 2020. As part of Phase 2, Northrop Grumman will work on subsystem testing followed by fabrication and in-water demonstrations of full-scale integrated vehicles. The company also broke ground on a new system integration and test lab that will use modeling and simulation to test the system’s software before getting loaded onto the vehicle.

To learn more about Manta Ray visit the DARPA website. Manta Ray is also featured in the new Welcome to Northrop Grumman video series.

Tuesday, July 04, 2023

Time To Revisit Pulsed Vibrating Plasmas And The Pais Effect

glennrocess |  So far, not a single physicist of note has been willing to give Dr. Pais’ claims anything but short shrift, and the Navy has since admitted they were never able to prove the Pais Effect actually existed, much less enabled any of Dr. Pais’ wondrous inventions. Soooo…that’s the end of the story, right? It was all just a case of “too good to be true”, right?

Nope. Don’t take off that neck brace just yet. Whiplash #2 was included in the fine print.

It turns out that during TheDrive.com’s investigation, they found a document submitted by NAVAIR’s Chief Scientist/Chief Technology Officer James Sheehy wherein he stated that Dr. Pais’ room temperature superconductor is “operable and enabled via the physics described in the patent application”.

Whiskey Tango Foxtrot, Oscar? The Navy says the Pais Effect doesn’t work, but NAVAIR’s Chief Scientist/CTO gave a sworn statement saying it does work! While I tend to be strongly skeptical of wild claims by any scientist, the ones in charge of research are responsible for keeping the pointy end of our military’s spear the sharpest on the planet, and tend to be hard-nosed, take-no-BS types. Of course they will lie through their teeth as the situation demands, but why would the one in charge lie about this?

I often tell my wife that one thing every military retiree learns along the way is how to justify (almost) anything. At a moment’s notice we can pump out barely-plausible excuses that would make OJ’s lawyers blush. This also means that we’re usually pretty good at figuring out why a government or military functionary would do something out of the ordinary. In this case, I can think of three possibilities: (1) Drs. Pais and Sheehy are both wrong and full of bovine excrement, (2) Dr. Pais is wrong, Dr. Sheehy knows it, but says it works, and (3) they’re both right and the Navy is now lying when it says that the Pais Effect cannot be proven to work.

  1. Dr. Pais and Dr. Sheehy are both wrong. While possible, this scenario is the least likely for the reasons I stated above. I think it is highly unlikely that Dr. Sheehy, being who and what he is, would have issued a sworn statement saying the Pais Effect worked if it didn’t actually work.
  2. Dr. Pais is wrong, Dr. Sheehy knows it, but says it works anyway. This is possible. In fact, Forbes.com posited that this could be a disinformation campaign in the vein of Reagan’s Strategic Defense Initiative, colloquially known as “Star Wars”: if we spend a few million dollars on a project and make wild claims as to its success, perhaps China will futilely waste hundreds of billions searching down the same Pais Effect rabbit hole. Notably, as early as 2017, Dr. Sheehy was already saying that China was investigating the effect. One must wonder, then, whether China is doing the same thing in reverse with the Pais Effect idea, and whether our best and brightest are now tearing their hair out trying to develop something that isn’t real.
  3. Both Dr. Pais and Dr. Sheehy are right, and the Navy is now lying about it. Maybe. Definitely maybe. Despite what the rest of the professional physics community says about the Pais Effect, IF it works, IF Drs. Pais and Sheehy are right, the Navy would have very good reason to deny it. The claimed inventions in and of themselves would radically change the balance of military and political power around the planet, so keeping such information under wraps would allow America to develop the technology and maintain sociopolitical supremacy much as we did by being the first to develop atomic and thermonuclear bombs. Of course, China would have the same motivation and would be much more effective at keeping it secret. “What is this thing called a Freedom Of Information Act request? Off to the reeducation camp with you!”

Indeed, hope springs eternal in the breasts of geeks, nerds, and retired sailors. Yes, we would dearly love for the Pais Effect to be real, for the dream of having a DeLorean with a Mr. Fusion pumping out the obligatory 1.21 gigawatts (did I mention Dr. Pais also patented a compact fusion reactor and may have worked on a spacetime modification weapons system?). But no. 

“Extraordinary claims require extraordinary evidence”, a phrase popularized by Carl Sagan, must be applied here. Until there is hard, publicly-verifiable proof that the Pais Effect (and all its follow-on technologies) works, Dr. Pais’ claims belong on the shelf alongside those of Pons and Fleischmann.

Saturday, July 01, 2023

80 Years Of Skunk Works Innovation...,

lockheedmartin  |  As we look at the technologies Skunk Works continues to develop now and for the future, it’s just as exciting, and as classified, as Skunk Works’ illustrious history. Work continues in critical areas like UAS, hypersonics, artificial intelligence, low observables and other revolutionary technologies. A prime example that is not being developed under the cloak of secrecy is the team partnering with NASA to develop and build X-59, the prototype that will quiet the supersonic boom.

The way we engineer and build these capabilities is evolving too, as we lean more and more into a digital approach that reduces cost and accelerates development.

The unique and proven Skunk Works philosophy has enabled the impossible to become reality for 80 years. This dedicated and growing team continues to embrace Kelly Johnson’s motto: be quick, be quiet and be on time. We innovate with urgency to push the boundaries, ensuring our customers have the capabilities needed to stay ahead of ready.

 

Of course, there are programs that we can’t share…yet.

Friday, May 05, 2023

Disney And The 1964 New York World's Fair

medium  |  Amidst the Cold War, the United States of America continued to thrive off industrial capitalism and consumerism as a way of embodying what America represented — freedom, power, pride and identity. It was during this era that universal exhibitions in the U.S. were used to showcase such themes and to continue showing the world how dominant America was, and how much it had achieved thus far in the twentieth century. Corporations were the main powerhouses at the world’s fairs, and none shone brighter than WED Enterprises, formed by Walt Disney, at the 1964 New York World’s Fair. Influenced by the ideals and values of world’s fairs, Walt visualized a concept ahead of its time — EPCOT.

World’s fairs have always been sites designed to showcase the achievements and technological advancements of nations. The 1964 World’s Fair held at Flushing Meadows Park in Queens, New York focused on showcasing mid-twentieth century American culture and technology, to promote “Peace through Understanding” during the Cold War and Space Age. With the help of over forty-five companies creating exhibitions and attractions, the fair acted as a grand consumer show, on a scale that would never be repeated at future world’s fairs in America, featuring numerous products produced in America for transportation, living and consumer-electronics needs. Among these products and inventions were videoconferencing, the Ford Mustang, push-button telephones and, most importantly, Disney audio-animatronics — a brand-new, state-of-the-art technology that was tested by Walt and later incorporated into his theme parks. Walt’s involvement with the fair began when city planner and fair organizer Robert Moses enlisted him, architect Philip Johnson, artist Donald De Lue and engineers from around the world to mastermind the world’s fair — resulting in a museum-theme-park-carnival monstrosity that rivaled any attraction on the planet. Shortly before the opening of the fair, Walt analyzed the history of fairs through animated depictions. He believed that the fairs originated as “sites of trade and commerce” and would later develop as stages of “talent and art”, before ultimately becoming a “cultured and industrialized monolith of growth and progress.”

“Disney had a huge footprint at the world’s fair, which sprawled over the same square mile in Flushing Meadows as its 1939–1940 predecessor, which also tried to predict the future,” says journalist Lou Lumenick in his New York Post article ‘Tomorrowland’, Disney and their links to the 1964–65 World’s Fair. At the 1939 New York World’s Fair, General Motors sponsored an exhibition entitled Futurama, in which guests would ride a vehicle on a conveyor system to view a scale model of what roadways and cities would look like twenty years into the future. Inspired by the attraction, Walt created two pavilions at the 1964–65 fair — Progressland and the Ford Pavilion. Sponsored by the General Electric Company, the Progressland Pavilion housed the exhibition The Carousel of Progress in a rotating theater with four stages that showed the lifestyle of an American family household during the 1890s, 1920s, 1950s and sometime in the distant future. The Ford Motors Pavilion housed the exhibition Ford’s Magic Skyway, in which guests rode fifty actual Ford vehicles, including the brand-new Ford Mustang, which passed slowly along an upper-level track. The ride moved the audience through scenes featuring life-sized audio-animatronic dinosaurs, before passing through a futuristic city and finally arriving back in the present.

While his role was mainly to create exhibitions and attractions through corporate sponsorships, Walt took matters into his own hands to utilize the fair as an experiment to test new technology for the already existing Disneyland in Anaheim, California, as well as to draw up a prototype of his vision for the city of tomorrow — EPCOT (Experimental Prototype Community of Tomorrow). Walt intended to create a utopian city of the future based upon the ideals and values of technology, transportation and community. In a twenty-five minute film shot shortly before his death, he described EPCOT as a city “taking its cues from the new ideas and new technologies that are now emerging from the creative centers of American industry.” Walt hoped that EPCOT would become a “community of tomorrow that will never be completed but will always be introducing and testing, and demonstrating new materials and new systems.” He concluded by saying, “EPCOT will always be a showcase to the world of the ingenuity and imagination of American free enterprise.” His original vision for EPCOT included a model community that would be home to twenty thousand residents and would be shaped in the form of a circle, with different businesses and commercial areas in the center. Around it would be community buildings, schools and recreational complexes, while residential neighborhoods would sit on the outskirts of the perimeter. At the time, Walt was fueled by his fascination with transportation and spent countless hours and enormous energy figuring out how to move people from place to place. After unveiling the first monorail in the Western Hemisphere at Disneyland in 1959, Walt utilized the technology from Ford’s Magic Skyway for the future PeopleMover that opened at Disneyland in 1967. But why was Disney so keen on bringing the concept of EPCOT to life and why did the world’s fair have such an impact?

 

Tuesday, April 04, 2023

India Beware: ChatGPT Is A Missile Aimed Directly At Low-Cost Software Production

theguardian  | “And so for me,” he concluded, “a computer has always been a bicycle of the mind – something that takes us far beyond our inherent abilities. And I think we’re just at the early stages of this tool – very early stages – and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes, [but] that’s nothing to what’s coming in the next 100 years.”

Well, that was 1990 and here we are, three decades later, with a mighty powerful bicycle. Quite how powerful it is becomes clear when one inspects how the technology (not just ChatGPT) tackles particular tasks that humans find difficult.

Writing computer programs, for instance.

Last week, Steve Yegge, a renowned software engineer who – like all uber-geeks – uses the ultra-programmable Emacs text editor, conducted an instructive experiment. He typed the following prompt into ChatGPT: “Write an interactive Emacs Lisp function that pops to a new buffer, prints out the first paragraph of A Tale of Two Cities, and changes all words with ‘i’ in them red. Just print the code without explanation.”

ChatGPT did its stuff and spat out the code. Yegge copied and pasted it into his Emacs session and published a screenshot of the result. “In one shot,” he writes, “ChatGPT has produced completely working code from a sloppy English description! With voice input wired up, I could have written this program by asking my computer to do it. And not only does it work correctly, the code that it wrote is actually pretty decent Emacs Lisp code. It’s not complicated, sure. But it’s good code.”
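If you want to try the same experiment without the ChatGPT web interface Yegge used, here is a minimal sketch using the OpenAI Python client. The client, the model name and the surrounding scaffolding are my assumptions for illustration, not part of his setup; he simply typed the prompt into ChatGPT and pasted the reply into Emacs.

from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()

prompt = (
    "Write an interactive Emacs Lisp function that pops to a new buffer, "
    "prints out the first paragraph of A Tale of Two Cities, and changes "
    "all words with 'i' in them red. Just print the code without explanation."
)

# The model name is illustrative; substitute whatever chat model you have access to.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# The reply is the generated Emacs Lisp, ready to paste into an Emacs session.
print(response.choices[0].message.content)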

Ponder the significance of this for a moment, as tech investors such as Paul Kedrosky are already doing. He likens tools such as ChatGPT to “a missile aimed, however unintentionally, directly at software production itself. Sure, chat AIs can perform swimmingly at producing undergraduate essays, or spinning up marketing materials and blog posts (like we need more of either), but such technologies are terrific to the point of dark magic at producing, debugging, and accelerating software production quickly and almost costlessly.”

Since, ultimately, our networked world runs on software, suddenly having tools that can write it – and that could be available to anyone, not just geeks – marks an important moment. Programmers have always seemed like magicians: they can make an inanimate object do something useful. I once wrote that they must sometimes feel like Napoleon – who was able to order legions, at a stroke, to do his bidding. After all, computers – like troops – obey orders. But to become masters of their virtual universe, programmers had to possess arcane knowledge, and learn specialist languages to converse with their electronic servants. For most people, that was a pretty high threshold to cross. ChatGPT and its ilk have just lowered it.

Monday, April 03, 2023

Transformers: Robots In Disguise?

quantamagazine |  Recent investigations like the one Dyer worked on have revealed that LLMs can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)

“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors, including several identified in Dyer’s project. That list continues to grow.

Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.

“We don’t know how to tell in which sort of application is the capability of harm going to arise, either smoothly or unpredictably,” said Deep Ganguli, a computer scientist at the AI startup Anthropic.

The Emergence of Emergence

Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.

Language models have been around for decades. Until about five years ago, the most powerful were based on what’s called a recurrent neural network. These essentially take a string of text and predict what the next word will be. What makes a model “recurrent” is that it learns from its own output: Its predictions feed back into the network to improve future performance.

In 2017, researchers at Google Brain introduced a new kind of architecture called a transformer. While a recurrent network analyzes a sentence word by word, the transformer processes all the words at the same time. This means transformers can process big bodies of text in parallel.

Transformers enabled a rapid scaling up of the complexity of language models by increasing the number of parameters in the model, as well as other factors. The parameters can be thought of as connections between words, and models improve by adjusting these connections as they churn through text during training. The more parameters in a model, the more accurately it can make connections, and the closer it comes to passably mimicking human language. As expected, a 2020 analysis by OpenAI researchers found that models improve in accuracy and ability as they scale up.

But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.
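For a sense of scale, the "simple mathematical code" in that anecdote is only a few lines long. A minimal Python version (my own illustration, not the code from the report) would be:

def first_primes(n=10):
    """Return the first n prime numbers by trial division against earlier primes."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes())  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]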

As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

He wasn’t alone. A raft of researchers, detecting the first hints that LLMs could reach beyond the constraints of their training data, are striving for a better grasp of what emergence looks like and how it happens. The first step was to thoroughly document it.

Transformers: More Than Meets The Eye?

quantamagazine  |  Imagine going to your local hardware store and seeing a new kind of hammer on the shelf. You’ve heard about this hammer: It pounds faster and more accurately than others, and in the last few years it’s rendered many other hammers obsolete, at least for most uses. And there’s more! With a few tweaks — an attachment here, a twist there — the tool changes into a saw that can cut at least as fast and as accurately as any other option out there. In fact, some experts at the frontiers of tool development say this hammer might just herald the convergence of all tools into a single device.

A similar story is playing out among the tools of artificial intelligence. That versatile new hammer is a kind of artificial neural network — a network of nodes that “learn” how to do some task by training on existing data — called a transformer. It was originally designed to handle language, but has recently begun impacting other AI domains.

The transformer first appeared in 2017 in a paper that cryptically declared that “Attention Is All You Need.” In other approaches to AI, the system would first focus on local patches of input data and then build up to the whole. In a language model, for example, nearby words would first get grouped together. The transformer, by contrast, runs processes so that every element in the input data connects, or pays attention, to every other element. Researchers refer to this as “self-attention.” This means that as soon as it starts training, the transformer can see traces of the entire data set.
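To make "self-attention" concrete, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of a transformer layer. It is a simplification: a real layer first maps the inputs through learned query, key and value projections, which are omitted here.

import numpy as np

def self_attention(X):
    # X has shape (n_tokens, d); every token attends to every other token.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                       # (n, n) pairwise attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over each row
    return weights @ X                                  # each output mixes all input tokens

tokens = np.random.randn(4, 8)           # 4 toy "tokens" with 8-dimensional embeddings
print(self_attention(tokens).shape)      # (4, 8): every output row depends on every input row

Because the score matrix is computed for all token pairs at once, the whole sequence can be processed in parallel, which is the property the article credits for the rapid scaling of these models.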

Before transformers came along, progress on AI language tasks largely lagged behind developments in other areas. “In this deep learning revolution that happened in the past 10 years or so, natural language processing was sort of a latecomer,” said the computer scientist Anna Rumshisky of the University of Massachusetts, Lowell. “So NLP was, in a sense, behind computer vision. Transformers changed that.”

Transformers quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text. It led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates consistent new text to an unsettling degree.

The success of transformers prompted the AI crowd to ask what else they could do. The answer is unfolding now, as researchers report that transformers are proving surprisingly versatile. In some vision tasks, like image classification, neural nets that use transformers have become faster and more accurate than those that don’t. Emerging work in other AI areas — like processing multiple kinds of input at once, or planning tasks — suggests transformers can handle even more.

“Transformers seem to really be quite transformational across many problems in machine learning, including computer vision,” said Vladimir Haltakov, who works on computer vision related to self-driving cars at BMW in Munich.

Just 10 years ago, disparate subfields of AI had little to say to each other. But the arrival of transformers suggests the possibility of a convergence. “I think the transformer is so popular because it implies the potential to become universal,” said the computer scientist Atlas Wang of the University of Texas, Austin. “We have good reason to want to try transformers for the entire spectrum” of AI tasks.

Wednesday, February 15, 2023

Who Masters These Technologies In Some Ways Will Be Master Of The World

Vox  |   In an economic race with enormous winner-takes-all stakes, a company is primarily thinking about whether to deploy their system before a competitor. Slowing down for safety checks risks that someone else will get there first. In geopolitical AI arms race scenarios, the fear is that China will get to AI before the US and have an incredibly powerful weapon — and that, in anticipation of that, the US may push its own unready systems into widespread deployment.

Even if alignment is a very solvable problem, trying to do complex technical work on incredibly powerful systems while everyone is in a rush to beat a competitor is a recipe for failure.

Some actors working on artificial general intelligence, or AGI, have planned significantly to avoid this dangerous trap: OpenAI, for instance, has terms in its charter specifically aimed at preventing an AI race once systems are powerful enough: “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be ‘a better-than-even chance of success in the next two years.’”

I am generally optimistic about human nature. No one actively wants to deploy a system that will kill us all, so if we can get good enough visibility into the problem of alignment, then it’ll be clear to engineers why they need a solution. But eager declarations that the race is on make me nervous.

Another great part of human nature is that we are often incredibly competitive — and while that competition can lead to great advancements, it can also lead to great destruction. It was the Cold War that drove the space race, but it was also WWII that drove the creation of the atomic bomb. If winner-takes-all competition is the attitude we bring to one of the most powerful technologies in human history, I don’t think humanity is going to win out.

Friday, January 06, 2023

Has Russia Already Mastered High-End Lithography?

smoothiex12  |  I am constantly on record that the Russian Ministry of Defense is well supplied (due to cannibalizing of washing machines, I guess) with all kinds of microchips, including ASICs and what have you. All this is due to boutique production which is fully localized. Otherwise, one may ask, how did Russians manage to manufacture their satellites now with a 100% Russian element base, and how come Russians openly state that their NTsUO main supercomputer is more powerful than anything the Pentagon's NMCC has?

The answer is simple. Read this (in Russian). 

Российский литограф 7 нм от ИПФ РАН! Литограф от НЦФМ за 2-3 года! Понеслось!

Translation: Russian lithograph for 7 nm from Institute of  Applied Physics of Russian Academy of Sciences. Lithograph from National Center of Physics and Mathematics in 2-3 years. Off we go! 

As it turned out, Russia had a working prototype for 30 nm in... 2011.

After that, all data following 2011 was... removed. Now, a puzzle. Look at what the newly created National Center of Physics and Mathematics is (in Russian). Or, rather, who runs the whole show? Yep, it is in Sarov and it is, of course, Rosatom. Now, let's go back to 2011 and ask ourselves a question: WHY was Sergei Kirienko, who headed Rosatom from 2005 through 2016 and is now the second person, after Vaino, in Putin's Staff, awarded in 2018 the highest honor of Hero of the Russian Federation, together with Yuri Borisov, with the vague description "for achievements in developing nuclear industry"? And, naturally, weapons (in Russian)

So, let's summarize. In 2011 Russia already had a working prototype lithograph for 30 nm structures. Then, in 2014, Russia unveiled the NTsUO and claimed that the supercomputer in it is way more powerful than the Pentagon's; then Rosatom effectively built Russia's composite materials industry; then we had some new reactors coming on-line; and then, of course, we had the hypersonic revolution in 2018. Just this short list tells you that this whole thing, requiring immense computing power, hasn't been done on Pentium 4 processors alone. So where did Russia get those hi-end processors, and why has it recently stated that fully Russian-made lithography is coming very soon? Well, we are now getting some whiff of the proceedings, which a few years ago I named a "revelation mode"

As I am constantly on record, one has to be able to read news properly and not miss all those important details. But above all, we need to understand how truly high-level strategic planning is done and why Russia was able to withstand all Western sanctions and sabotage and, in fact, benefited from them strategically. One has to assume with a very high probability that modelling of technological, industrial, military and, in the end, geopolitical trends has been done on something which we haven't seen yet. What is known now is that it is some extremely capable computation on something which is fully domestically made. But the signs and clues have been around for a long time now. How do you think you design something like the 3M22 Zircon or Peresvet with Avangard? I guess we've got part of the answer. But I am on record: the nation which produces all that will produce a modern chip industry sooner or later. Looks like it is going to be sooner, and don't tell me I didn't warn you;)

Has China Leapfrogged ASML EUV Lithography?

reuters  |  The chief executive of ASML Holding NV, the Dutch semiconductor equipment maker, on Tuesday questioned whether a U.S. push to get the Netherlands to adopt new rules restricting exports to China makes sense.

"Maybe they think we should come across the table, but ASML has already sacrificed," CEO Peter Wennink said in an interview with newspaper NRC Handelsblad.

He said that following U.S. pressure, the Dutch government has already restricted ASML from exporting its most advanced lithography machines to China since 2019, something he said has benefited U.S. companies selling alternative technology.

He said that while 15% of ASML's sales are in China, at U.S. chip equipment suppliers "it is 25 or sometimes more than 30%".

A spokesperson for ASML confirmed the remarks in the interview were accurate but declined further comment.

The Biden administration issued new export rules for U.S. companies in October aimed at cutting off China's ability to manufacture advanced semiconductor chips in a bid to slow its military and technological advances.

Washington is urging the Netherlands, Japan and other unspecified countries with companies that make cutting edge manufacturing equipment to adopt similar rules. The Dutch trade minister has confirmed talks are ongoing.

Wennink said it seemed contradictory that U.S. chip manufacturers are able to sell their most advanced chips to Chinese customers, while ASML is only able to sell older chipmaking equipment.

 

 

 

Tuesday, December 20, 2022

Nuclear Engine For Rocket Vehicle Application

autoevolution |  We don't blame you if you're shocked the United States wielded a nuclear spacecraft engine as far back as the 1960s. You're probably even more shocked that hardly anyone remembers it. The Nuclear Engine for Rocket Vehicle Application (NERVA) project would've been nothing short of a crown jewel program for any other research team. But not for New Mexico's Los Alamos Laboratories.

That's right; the NERVA engine was developed by the same team who brought the world the first nuclear-fission weapons. The very same that helped end World War II. If there was ever a project substantial or significant enough to overshadow literal nuclear rocket engines, that certainly fits the description. For Los Alamos scientists and engineers, it makes sense the first logical step post-Manhattan Project would be in the direction of rocket engines.

Come the end of the Second World War, novel German rocket science from future NASA personnel like Wernher von Braun was in the hands of the Americans. But while the V2 chemical rocket was nothing short of witchcraft to average folks in the mid-1940s, it wouldn't be long before experts asked whether there was another, more powerful means of fueling rocket engines.

In the following decade, a torrent of proposals across America for nuclear-powered planes, trains, and automobiles defined the 1950s as the start of the atomic era. Right alongside preposterous ideas like Ford's Nucleon passenger car was one of the first working concepts for a nuclear fission-powered thermal rocket. One that, in theory, could provide power and fuel economy no traditional chemical rocket could ever dream of. 

Though any number of nuclear isotopes could theoretically do the job, Los Alamos Labs and Westinghouse chose enriched Uranium-235 for the NERVA application. This choice was made because U-235 is lighter and less prone to super-criticality than its Uranium-238 cousin. As a result, it has the potential for an incredibly high value of what rocket scientists call specific impulse.

With the potential to heat hydrogen fuel to 2,400 Kelvin (3860.3°F, 2126°C), the NERVA engine could have provided American spacecraft with exceptional performance while not being so wasteful that it couldn't conserve fuel for an entire mission. The potential for space exploration seemed palpable during the NERVA development. Be it traveling to near planets like Mars and Venus or even places farther off like the Asteroid Belt. It was all suddenly theoretically possible.

In August 1960, the recently formed NASA established the Space Nuclear Propulsion Office with the sole purpose of overseeing the NERVA program and any developments made afterward. With offices in Germantown, Maryland, Cleveland, Ohio, and Albuquerque, New Mexico, the resources and personnel required to keep the program running spanned the continental U.S.  

Six NERVA technology demonstrators were built between 1964 and 1973. The highest power threshold NASA could muster during testing was a scarcely believable 246,663 newtons (55,452 lbf) of thrust and a specific impulse of 710 seconds (7.0 km/s) in the NERVA  Alpha variant. This engine could theoretically operate in deep space and maintain this level of thrust throughout the duration of a space mission. So you can only imagine what NASA may have had planned.
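The quoted figures are internally consistent. Specific impulse in seconds converts to effective exhaust velocity by multiplying by standard gravity, and the thrust number is a straight unit conversion; a quick check (my own arithmetic, not from the article):

G0 = 9.80665           # standard gravity, m/s^2
LBF = 4.44822          # newtons per pound-force

isp_seconds = 710
thrust_newtons = 246_663

exhaust_velocity = isp_seconds * G0     # ~6963 m/s, i.e. about 7.0 km/s
thrust_lbf = thrust_newtons / LBF       # ~55,452 lbf

print(round(exhaust_velocity), round(thrust_lbf))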

Records indicate Wernher Von Braun envisioned a successor booster rocket to the Saturn V, called the Nova series. Had it been built, the nuclear/chemical hybrid rocket would have joined the Space Shuttle in a spacecraft fleet that would have been nothing short of astonishing. One can only imagine how humans could have landed on the surface of Mars by the early 1980s had everything gone to plan.

Underwater Supersonic Objects

dailymail |  As swimmers know, moving cleanly through the water can be a problem due to the huge amounts of drag created - and for submarines, this is even more of a problem.

However, US Navy funded researchers say they have a simple solution - a bubble.

Researchers at Penn State Applied Research Laboratory are developing a new system using a technique called supercavitation.

The new idea is based on Soviet technology developed during the cold war.

Called supercavitation, it envelopes a submerged vessel inside an air bubble to avoid problems caused by water drag.

A Soviet supercavitation torpedo called Shkval was able to reach a speed of 370 km/h or more - much faster than any conventional torpedo.

In theory, a supercavitating vessel could reach the speed of sound underwater, or about 5,800km/h.

This would reduce the journey time for a transatlantic underwater cruise to less than an hour, and for a transpacific journey to about 100 minutes, according to a report by California Institute of Technology in 2001.
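Those transit-time claims follow directly from the quoted speed. Taking rough great-circle distances (my assumptions: about 5,500 km across the Atlantic and about 9,900 km from Shanghai to San Francisco):

SPEED_KMH = 5_800          # claimed near-sonic underwater speed

atlantic_km = 5_500        # assumed transatlantic distance
pacific_km = 9_900         # assumed Shanghai to San Francisco distance

print(atlantic_km / SPEED_KMH * 60)   # ~57 minutes, i.e. "less than an hour"
print(pacific_km / SPEED_KMH * 60)    # ~102 minutes, i.e. "about 100 minutes"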

However, the technique also results in a bumpy ride - something the new team has solved. 

'Basically supercavitation is used to significantly reduce drag and increase the speed of bodies in water,' said Grant M. Skidmore, recent Penn State Ph.D. recipient in aerospace engineering.

'However, sometimes these bodies can get locked into a pulsating mode.'

Creating a supercavitation bubble inside a rigid-walled water tunnel, getting it to pulsate, and then stopping the pulsations had not been done before.

'Eventually we ramped up the gas really high and then way down to get pulsation,' said Jules W. Lindau, senior research associate at ARL and associate professor of aerospace engineering.

They found that once they had supercavitation with pulsation, they could moderate the air flow and, in some cases, stop pulsation.

'Supercavitation technology might eventually allow high speed underwater supercavitation transportation,' said Moeney.  

China is also developing a 'supersonic' submarine that could travel from Shanghai to San Francisco in less than two hours.

Researchers say their new craft uses a radical new technique to create a 'bubble' to surround itself, cutting down drag dramatically. 

In theory, the researchers say, a supercavitating vessel could reach the speed of sound underwater, or about 5,800km/h. 

The technology was developed by a team of scientists at Harbin Institute of Technology's Complex Flow and Heat Transfer Lab. 

Li Fengchen, professor of fluid machinery and engineering, told the South China Morning Post he was 'very excited by its potential'. 

The new sub is based on Soviet technology developed during the cold war.

Wednesday, December 14, 2022

Russia: Gas Station With Nuclear Weapons That Has Mastered The Complete Nuclear Fuel Cycle

nuclear-news  |  The plutonium for this was produced from uranium during the operation of other nuclear power plants and recovered from the used fuel assemblies through reprocessing.

MOX fuel is manufactured from plutonium recovered from used reactor fuel, mixed with depleted uranium which is a by-product from uranium enrichment.

“Full conversion of the BN-800 to MOX fuel is a long-anticipated milestone for the nuclear industry. For the first time in the history of Russian nuclear power, we proceed to operation of a fast neutron reactor with a full load of uranium-plutonium fuel and closed nuclear fuel cycle,” said Alexander Ugryumov, Senior Vice President for Research and Development at TVEL JSC.

“This is the original reason and target why the BN-800 was developed, and why Rosatom built the unique automated fuel fabrication facility at the Mining and Chemical Combine. Advanced technologies of fissile materials recycling and re-fabrication of nuclear fuel will make it possible to expand the resource feed-stock of the nuclear power, reprocess irradiated fuel instead of storing it, and to reduce the volumes of waste.”

The unit is a sodium-cooled fast reactor which produces about 820 MWe. It started operation in 2016 and in 2020 achieved a capacity factor of 82% despite having an experimental role in proving reactor technologies and fuels.
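For context, an 82% capacity factor means the unit delivered 82% of the energy it would have produced running flat out all year. A quick back-of-the-envelope check (my arithmetic, assuming the 820 MWe rating above):

RATED_MWE = 820
CAPACITY_FACTOR = 0.82
HOURS_PER_YEAR = 8_760

average_output_mw = RATED_MWE * CAPACITY_FACTOR                 # ~672 MW average
annual_energy_twh = average_output_mw * HOURS_PER_YEAR / 1e6    # ~5.9 TWh in 2020

print(round(average_output_mw), round(annual_energy_twh, 1))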

Friday, June 17, 2022

Valodya's Meeting With The Head Of Rosnano State Corporation Sergei Kulikov

kremlin.ru  |  Vladimir Putin: Sergey Alexandrovich, the company started operating in its original form in 2007. During this time, some 150 enterprises have been created, I believe, along with several tens of thousands of jobs - somewhere under 40 thousand.

Let's talk about the results of the work in general.

Sergei Kulikov: Mr President, this is indeed true.

You, as the founder and ideologist of this program, know better than anyone else that this is not just a state corporation, not just a joint-stock company, and not even just a development institution – it is a symbol of investing in science, technology, and the future.

I will try to focus my report on three aspects: technology, science and education, and money.

Indeed, 150 enterprises have been created, and nanotechnologies have taken root in six technological clusters. These include electronics, the materials themselves, optronics, and even the disposal of municipal solid waste. In terms of science, 53 billion rubles were spent on R&D. One and a half thousand students graduate annually from nanotechnology departments in 28 universities across the country.

As we said in December, what we have launched is not yet a program but an initiative for mathematical modeling of materials, and it has already begun to show results in prototypes. We did not just start: in the MISiS laboratory, for example, we improved the properties of thermoelectrics by 30–40 percent thanks to mathematical modeling, and this year we have already launched the next cycle, with major players who are now beginning to understand that everything starts with materials.

Finance: we are paying off debts; last year we paid off the first 20 billion. For those with whom we agree on a discount, we, of course, meet them halfway, but the interest accumulates. I have prepared several proposals, which I will report to you.

The good news is that, against the 233 billion rubles invested in Rusnano over the years you mentioned, by 2020 some 155 billion rubles had been received from exits from the portfolio, from assets. We added another 50 billion rubles to this piggy bank last year; thus, we have overcome the psychological barrier of 200 billion rubles, nearly equaling the investment costs, which, we think, confirms the overall profitability of our activities.

Returning to the point that this is, after all, a symbol and not just a joint-stock company, I would like to emphasize that over these ten years it has been proven that nanotechnology is necessary, that it is achievable, and that a competitive product cannot be obtained today without immersion in the morphology of the material. And this is probably what is really worth investing in - and the time to invest in it is right now.

What are we rich in today? First, there are three professions.

Nanotechnologist. To be honest, I tried to master it myself as an external student, but I realized that it was better to do my own job and create the conditions for replenishing the army of process engineers and nanotechnologists.

The second important profession is the technology entrepreneur. And we have already launched one startup studio at ITMO [National Research University] as part of the University Technological Entrepreneurship program under the auspices of the Ministry of Education – it has already begun to give interesting results, and we have 14 [startup studios] in our plan for this year. The task is simply to get from idea to product much faster.

And the third profession is an investor in science and technology, I would say. This is a translator between business and science: someone who knows what money is being raised for today and sets that task for scientists, and who, conversely, looks at what scientists invent and raises money for it.

As for the portfolio: we have 51 assets left today, of which 18 are problematic to varying degrees. One example is Liotech in Novosibirsk, a manufacturer of accumulators and batteries. It is an old, “bearded” story: the enterprise went bankrupt several times and we tried to restart it, but in the end what we are saving, first of all, is the team and the intellectual property. We have found a use for them: together with Rosseti - Rosseti Center - we operate system-level storage units in eleven regions, and we got through the autumn-winter period in small towns with virtually no failures. Today we are already developing the next generation of these solutions.

We have postponed the sale plans for 13 companies until 2023–2024, because it has become apparent that they are needed today to maintain critical infrastructure. I will give examples.

For example, Novomet in Perm is an excellent company that produces submersible pumps for the oil industry. In general, we expect that if we position it as an assembly point, we will be able to gather the competencies needed to become an alternative supplier in principle, or to replace those who have now decided to leave the market.

“Russian Membranes” in Vladimir is, one might say, the heart and basis of water treatment in general: not only desalination, which is used in the [Persian] Gulf countries and where we are working actively, but also water purification, which is especially important today. You know perfectly well that we have reached agreement with the two governors, and we are now piloting these solutions.

Optovolokno is an enterprise in Saransk, Mordovia; the governor, the Ministry of Industry and Trade and I have agreed to develop it to ...

Vladimir Putin:  We need source materials.

Sergei Kulikov: Of course. We will now finish building another processing stage in order to ensure sustainability.

And of course, three American and Japanese suppliers have now left the market, and we are competing only with the Chinese, which is difficult, for a place in the power-cable and telecom-cable markets. But it is also a very interesting task, and one with plenty of room to grow.

We have, as it were, set these assets aside, but we will still write them into the strategy so that a private investor can join this task.

Vladimir Putin:  Is this realistic? Do you think you will do it?

Sergei Kulikov: We have no choice. How could we not? Especially in today's environment: people need to communicate, networks need to be managed. There is no choice; it must be done by any means. And even if we cannot deliver something ourselves, then we will have to look for ways to produce it.

We have prepared 20 assets for sale, including foreign ones - for example, the assets known to you in the field of alternative energy. We are exiting them and reconfiguring the teams for new tasks. For example, our power engineers will take up small-scale generation and the same system-level storage units, that is, hybrid solutions that can be applied today.

We kept two waste incineration assets, began to apply this competence, and began to look for new technologies. We discovered a wonderful solution for ash-free disposal: we built two reactors, and now Rosprirodnadzor practically never leaves the site, surprised, still checking that there is no mistake. That is, we have no emissions, because there is no combustion, and I will also show you this solution after the report.

The manufacturer of nanotubes - you know about it. It went all the way from a startup - the first four stages of technology maturation - to an IPO. This, in fact, illustrates the general function of Rosnano: we pick a project up from the first to the fourth stage, then from the fourth to the eighth, and then bring it to the market or become a strategic partner.

We joined forces with the founders and this year brought the nanotube into use in automotive components for electric vehicles, and we are now piloting it in road surfacing. For example, on the Moscow–Don [highway], nanotubes were added to the asphalt material, and we are surprised that at plus 50 degrees no rut forms. It seems to me that this deserves a separate development effort, perhaps a program of more than ten years, in order to see how our roads can be used effectively.

All this added up: as if by word of mouth, investors began to come to us, and our portfolio grew by 30 percent over the past year. For the entire period of Rosnano's work up to 2020, 65 billion rubles of extra-budgetary funds were attracted. We raised 68 [billion] for projects last year, of which only four [billion] is our own funds; the rest is external financing.

It seems to me, if we talk about further reincarnation, that Rosnano - which, as you remember, went from a state corporation to a joint-stock company - should probably now think about a public-private partnership. That is, in newly created funds we can, in principle, already increase the share of a private investor. Our ambition in the strategy is to attract, in the first half of its implementation, in the proportion of one to four - that is, for every state or quasi-state ruble, four external rubles - and by the end of the implementation period, one to eight.

The team was rebooted, with respect for the founders; in fact, we are even forming a "Rosnano university" club. We attracted many young colleagues, added competencies that we lacked, and, building on the groundwork created earlier and the groundwork we have already formed today, we are looking at projects in the fields of ecology, healthcare, mobility, energy and, of course, security.

Vladimir Putin:  But you and I understand that in this regard, one of the key tasks is to take further steps to improve the financial situation.

Sergei Kulikov:  Of course.

Israel Cannot Lie About Or Escape Its Conspicuous Kinetic Vulnerability

nakedcapitalism |   Israel has vowed to respond to Iran’s missile attack over the last weekend, despite many reports of the US and its allies ...