Saturday, March 25

A New Kind of Thinking


"A New Kind of Science," by Stephen Wolfram was a book I read over ten years ago just after it was first published.    
Stephen Wolfram is an interesting person, born in London and educated at Oxford, who gained a Ph.D. in theoretical physics at only 20 years of age. In the 1980s he made a series of discoveries that, he posited, yielded new insights into physics, mathematics, computer science, and biology, shedding new light on how everything tends to work in our universe. Although his work initially attracted some criticism within the scientific community, it has ultimately provided food for thought on conceptual puzzles and longstanding issues in philosophy.

In 1987 he founded Wolfram Research, whose flagship product, Mathematica, has become one of the world’s leading software systems for technical and symbolic computation.

The book represents ten years of research and is of interest to scientists and non-scientists alike. The use of graphics further enhances the presentation and makes his conclusions easy to grasp.
The basis of his experiments is the observation of the behavioural output of simple program instructions, or rules. By way of contrast, we are all familiar with software written to achieve a purpose: producing our pay or invoicing records, or keeping vast quantities of information stored in what is described as the “cloud”.

Wolfram wrote simple programs and observed their outputs. He used black and white squares, but you could use any items – blue and white beads if they took your fancy – since it is only the behaviour of those cells that is the subject of the experiment. Hence he calls these systems cellular automata.

His early experiments yielded results markedly different from what he had anticipated, so he became interested in this phenomenon, which is encapsulated within the pages of this book.
To reiterate, however, these computer programs can be described more accurately simply as rules. The programs tell the computer to carry out some instructions for a specified number of steps. There is no intention to achieve a result other than to see what happens. He begins with basic programming rules and builds up to very mildly complex instructions.

Take, for example, the so-called rule 30.

An elaboration is as follows: start with a single black square and apply the rule line by line. First, look at each cell and its right-hand neighbour. If both of these were white on the previous step, then take the new colour of the cell to be whatever the previous colour of its left-hand neighbour was; otherwise, make the new colour the opposite of that.
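To make this concrete, here is a minimal sketch in Python of the rule just described (my own illustration for this post, not Wolfram's code; the row width and number of steps are arbitrary choices):

    def rule30_step(row):
        # Apply the rule once: if a cell and its right-hand neighbour were
        # both white (0), the cell takes its left-hand neighbour's old
        # colour; otherwise it takes the opposite of that colour.
        new = []
        for i in range(len(row)):
            left = row[i - 1] if i > 0 else 0
            centre = row[i]
            right = row[i + 1] if i < len(row) - 1 else 0
            if centre == 0 and right == 0:
                new.append(left)
            else:
                new.append(1 - left)
        return new

    # Start from a single black cell (1) and print each step as a row.
    width, steps = 41, 20
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if cell else "." for cell in row))
        row = rule30_step(row)

Even in a toy run like this, the interior of the growing triangle already looks irregular, which is the surprise the next paragraphs describe.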

Your intuition would tell you that if you followed such a programming rule, some sort of repetitive pattern should appear in the cellular output, since the same rule is being applied over and over again.

Yet after 1,500 steps of this program, involving some 2 million cells, there are no signs of any regularity, and the pattern obtained seems to continue to evolve.

Hence the book of some 800 pages presents hundreds of experiments demonstrating this principle. Cellular automata and their principles have been debated in a variety of forums, from philosophy to how best to restore ecological systems to their condition before global warming. For a reference to how they are applied to ecological modelling, click on the link below:

Application to ecosystems.

From a philosophical stance, an entry in the Stanford Encyclopedia of Philosophy introduces a “Hat rule” thought experiment to explain how cellular automata work.

Reference: Berto, Francesco and Tagliabue, Jacopo, "Cellular Automata", The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/spr2017/entries/cellular-automata/

As an example of cellular automata, think of Fig. 1 as standing for the front row of a high school classroom. Each box represents a student wearing (black) or not wearing (white) a hat. Let us make the two following assumptions:

Hat rule: a student will wear the hat in the following class if one or the other—but not both—of the two classmates sitting immediately on her left and on her right has the hat in the current class (let us say that if nobody wears the hat, then a hat is out of fashion; but if both neighbors wear it, it is too popular to be trendy).

Initial class: during the first class in the morning, only one student in the middle is wearing the hat.

What happens as time goes by? (Consecutive rows represent the evolution in time through subsequent classes.)
What happens is surprising. The complex evolutionary pattern displayed contrasts with the simplicity of the underlying law (the “Hat rule”) and ontology (for in terms of objects and properties, we only need to take into account simple atoms and two states). In a sense, though, the global, emergent behaviour of the system supervenes upon its local, simple features. The scale at which the decision to wear the hat is made (immediate neighbours) is not the scale at which the interesting patterns become manifest.
While somewhat artificial, this example is a paradigmatic illustration of what makes CA appealing to a vast range of researchers: “even perfect knowledge of individual decision rules does not always allow us to predict macroscopic structure. We get macro-surprises despite complete micro-knowledge” (Epstein 1999, p. 48). Since the notion of emergence and the micro-macro interplay play such an important role in science and philosophy (see the entries on supervenience and emergent properties; for a sample of scientific applications, see Mitchell 2009, pp. 2–13, Gell-Mann 1994, Ch. 9), it has been suggested that many scientific as well as conceptual puzzles can be addressed by adopting the CA perspective. One of the leading thinkers in the field, Stephen Wolfram, has gone as far as claiming that CA may help us to solve longstanding issues in philosophy.
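To make the Hat rule concrete as well, here is a minimal Python sketch (again my own illustration, not from the Stanford entry). Wearing the hat next class exactly when one, but not both, neighbours wears it now is an exclusive-or update, known as rule 90 in Wolfram's numbering:

    def hat_step(row):
        # A student wears the hat (1) next class iff exactly one of her
        # immediate neighbours wears it now (XOR); students at the ends
        # are treated as having a hatless neighbour beside them.
        n = len(row)
        return [(row[i - 1] if i > 0 else 0) ^ (row[i + 1] if i < n - 1 else 0)
                for i in range(n)]

    row = [0] * 31
    row[15] = 1  # initial class: only the middle student wears the hat
    for _ in range(16):
        print("".join("#" if cell else "." for cell in row))
        row = hat_step(row)

The printout is the nested Sierpinski-triangle pattern: exactly the “complex evolutionary pattern” the quoted passage describes, emerging from a one-line local rule.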

Hence Wolfram’s experiments show that very complex systems can be built up from very basic underlying instructions or beginnings. Over the course of the book he shows the implications for many fields of current knowledge.

Fast forward to Wolfram’s blog and you get further insight into his more recent thinking, which I have reproduced here.
A Short Talk on AI Ethics


October 17, 2016

Click on the link for graphical examples of cellular automata.

Last week I gave a talk (and did a panel discussion) at a conference entitled “Ethics of Artificial Intelligence” held at the NYU Philosophy Department’s Center for Mind, Brain and Consciousness. Here’s the video and a transcript:
Thanks for inviting me here today.
You know, it’s funny to be here. My mother was a philosophy professor in Oxford. And when I was a kid I always said the one thing I’d never do was do or talk about philosophy. But, well, here I am.
Before I really get into AI, I think I should say a little bit about my worldview. I’ve basically spent my life alternating between doing basic science and building technology. I’ve been interested in AI for about as long as I can remember. But as a kid I started out doing physics and cosmology and things. That got me into building technology to automate stuff like math. And that worked so well that I started thinking about, like, how to really know and compute everything about everything. That was in about 1980—and at first I thought I had to build something like a brain, and I was studying neural nets and so on. But I didn’t get too far.
And meanwhile I got interested in an even bigger problem in science: how to make the most general possible theories of things. The dominant idea for 300 years had been to use math and equations. But I wanted to go beyond them. And the big thing I realized was that the way to do that was to think about programs, and the whole computational universe of possible programs.

And that led to my personal Galileo-like moment. I just pointed my “computational telescope” at these simplest possible programs, and I saw this amazing one I called rule 30—that just seemed to go on producing complexity forever from essentially nothing.
Well, after I’d seen this, I realized this is actually something that happens all over the computational universe—and all over nature. It’s really the secret that lets nature make all the complicated stuff we see. But it’s something else too: it’s a window into what raw, unfettered computation is like. At least traditionally when we do engineering we’re always building things that are simple enough that we can foresee what they’ll do.
But if we just go out into the computational universe, things can be much wilder. Our company has done a lot of mining out there, finding programs that are useful for different purposes, like rule 30 is for randomness. And modern machine learning is kind of part way from traditional engineering to this kind of free-range mining.
But, OK, what can one say in general about the computational universe? Well, all these programs can be thought of as doing computations. And years ago I came up with what I call the Principle of Computational Equivalence—that says that if behavior isn’t obviously simple, it typically corresponds to a computation that’s maximally sophisticated. There are lots of predictions and implications of this. Like that universal computation should be ubiquitous. As should undecidability. And as should what I call computational irreducibility.


Take something like rule 30 again. Can you predict what it’s going to do? Well, it’s probably computationally irreducible, which means you can’t figure out what it’s going to do without effectively tracing every step and going through the same computational effort it does. It’s completely deterministic. But to us it’s got what seems like free will—because we can never know what it’s going to do.

Here’s another thing: what’s intelligence? Well, our big unifying principle says that everything—from a tiny program to our brains—is computationally equivalent. There’s no bright line between intelligence and mere computation. The weather really does have a mind of its own: it’s doing computations just as sophisticated as our brains. To us, though, it’s pretty alien computation. Because it’s not connected to our human goals and experiences. It’s just raw computation that happens to be going on.

So how do we tame computation? We have to mold it to our goals. And the first step there is to describe our goals. And for the past 30 years what I’ve basically been doing is creating a way to do that.
I’ve been building a language—that’s now called the Wolfram Language—that allows us to express what we want to do. It’s a computer language. But it’s not really like other computer languages. Because instead of telling a computer what to do in its terms, it builds in as much knowledge as possible about computation and the world, so that we humans can describe in our terms what we want, and then it’s up to the language to get it done as automatically as possible.

This basic idea has worked really well, and in the form of Mathematica it’s been used to make endless inventions and discoveries over the years. It’s also what’s inside Wolfram Alpha. Where the idea is to take pure natural language questions, understand them, and use the kind of curated knowledge and algorithms of our civilization to answer them. And, yes, it’s a very classic AI thing. And of course it’s computed answers to billions and billions of questions from humans, for example inside Siri.
I had an interesting experience recently, figuring out how to use what we’ve built to teach computational thinking to kids. I was writing exercises for a book. At the beginning, it was easy: “make a program to do X”. But later on, it was like “I know what to say in the Wolfram Language, but it’s really hard to express in English”. And of course that’s why I just spent 30 years building the Wolfram Language.

English has maybe 25,000 common words; the Wolfram Language has about 5000 carefully designed built-in constructs—including all the latest machine learning—together with millions of things based on curated data. And the idea is that once one can think about something in the world computationally, it should be as easy as possible to express it in the Wolfram Language. And the cool thing is, it really works. Humans, including kids, can read and write the language. And so can computers. It’s a kind of high-level bridge between human thinking, in its cultural context, and computation.
OK, so what about AI? Technology has always been about finding things that exist, and then taming them to automate the achievement of particular human goals. And in AI the things we’re taming exist in the computational universe. Now, there’s a lot of raw computation seething around out there—just as there’s a lot going on in nature. But what we’re interested in is computation that somehow relates to human goals.
So what about ethics? Well, maybe we want to constrain the computation, the AI, to only do things we consider ethical. But somehow we have to find a way to describe what we mean by that.
Well, in the human world, one way we do this is with laws. But how do we connect laws to computations? We may call them “legal codes”, but today laws and contracts are basically written in natural language. There’ve been simple computable contracts in areas like financial derivatives. And now one’s talking about smart contracts around cryptocurrencies.

But what about the vast mass of law? Well, Leibniz—who died 300 years ago next month—was always talking about making a universal language to, as we would say now, express it all in a computable way. He was a few centuries too early, but I think now we’re finally in a position to do this.
I just posted a long blog about all this last week, but let me try to summarize. With the Wolfram Language we’ve managed to express a lot of kinds of things in the world—like the ones people ask Siri about. And I think we’re now within sight of what Leibniz wanted: to have a general symbolic discourse language that represents everything involved in human affairs.
I see it basically as a language design problem. Yes, we can use natural language to get clues, but ultimately we have to build our own symbolic language. It’s actually the same kind of thing I’ve done for decades in the Wolfram Language. Take even a word like “plus”. Well, in the Wolfram Language there’s a function called Plus, but it doesn’t mean the same thing as the word. It’s a very specific version that has to do with adding things mathematically. And as we design a symbolic discourse language, it’s the same thing. The word “eat” in English can mean lots of things. But we need a concept—that we’ll probably refer to as “eat”—that’s a specific version that we can compute with.
So let’s say we’ve got a contract written in natural language. One way to get a symbolic version is to use natural language understanding—just like we do for billions of Wolfram Alpha inputs, asking humans about ambiguities. Another way might be to get machine learning to describe a picture. But the best way is just to write in symbolic form in the first place, and actually I’m guessing that’s what lawyers will be doing before too long.
And of course once you have a contract in symbolic form, you can start to compute about it, automatically seeing if it’s satisfied, simulating different outcomes, automatically aggregating it in bundles, and so on. Ultimately the contract has to get input from the real world. Maybe that input is “born digital”, like data about accessing a computer system, or transferring bitcoin. Often it’ll come from sensors and measurements—and it’ll take machine learning to turn into something symbolic.
Well, if we can express laws in computable form maybe we can start telling AIs how we want them to act. Of course it might be better if we could boil everything down to simple principles, like Asimov’s Laws of Robotics, or utilitarianism or something.
But I don’t think anything like that is going to work. What we’re ultimately trying to do is to find perfect constraints on computation, but computation is something that’s in some sense infinitely wild. The issue already shows up in Gödel’s Theorem. Like let’s say we’re looking at integers and we’re trying to set up axioms to constrain them to just work the way we think they do. Well, what Gödel showed is that no finite set of axioms can ever achieve this. With any set of axioms you choose, there won’t just be the ordinary integers; there’ll also be other wild things.
And the phenomenon of computational irreducibility implies a much more general version of this. Basically, given any set of laws or constraints, there’ll always be “unintended consequences”. This isn’t particularly surprising if one looks at the evolution of human law. But the point is that there’s theoretically no way around it. It’s ubiquitous in the computational universe.
Now I think it’s pretty clear that AI is going to get more and more important in the world—and is going to eventually control much of the infrastructure of human affairs, a bit like governments do now. And like with governments, perhaps the thing to do is to create an AI Constitution that defines what AIs should do.
What should the constitution be like? Well, it’s got to be based on a model of the world, and inevitably an imperfect one, and then it’s got to say what to do in lots of different circumstances. And ultimately what it’s got to do is provide a way of constraining the computations that happen to be ones that align with our goals. But what should those goals be? I don’t think there’s any ultimate right answer. In fact, one can enumerate goals just like one can enumerate programs out in the computational universe. And there’s no abstract way to choose between them.
But for us there’s a way to choose. Because we have particular biology, and we have a particular history of our culture and civilization. It’s taken us a lot of irreducible computation to get here. But now we’re just at some point in the computational universe that corresponds to the goals that we have.
Human goals have clearly evolved through the course of history. And I suspect they’re about to evolve a lot more. I think it’s pretty inevitable that our consciousness will increasingly merge with technology. And eventually maybe our whole civilization will end up as something like a box of a trillion uploaded human souls.
But then the big question is: “what will they choose to do?” Well, maybe we don’t even have the language yet to describe the answer. If we look back even to Leibniz’s time, we can see all sorts of modern concepts that hadn’t formed yet. And when we look inside a modern machine learning or theorem proving system, it’s humbling to see how many concepts it effectively forms—that we haven’t yet absorbed in our culture.
Maybe looked at from our current point of view, it’ll just seem like those disembodied virtual souls are playing videogames for the rest of eternity. At first maybe they’ll operate in a simulation of our actual universe. Then maybe they’ll start exploring the computational universe of all possible universes.
But at some level all they’ll be doing is computation—and the Principle of Computational Equivalence says it’s computation that’s fundamentally equivalent to all other computation. It’s a bit of a letdown. Our proud future ending up being computationally equivalent just to plain physics, or to little rule 30.
Of course, that’s just an extension of the long story of science showing us that we’re not fundamentally special. We can’t look for ultimate meaning in where we’ve reached. We can’t define an ultimate purpose. Or ultimate ethics. And in a sense we have to embrace the details of our existence and our history.
There won’t be a simple principle that encapsulates what we want in our AI Constitution. There’ll be lots of details that reflect the details of our existence and history. And the first step is just to understand how to represent those things. Which is what I think we can do with a symbolic discourse language.
And, yes, conveniently I happen to have just spent 30 years building the framework to create such a thing. And I’m keen to understand how we can really use it to create an AI Constitution.
So I’d better stop talking about philosophy, and try to answer some questions.

After the talk there was a lively Q&A (followed by a panel discussion), included on the video. Some of the questions raised are taken up in the conclusion below.

Conclusion
I have inserted my own brief answers to the following philosophical questions, but it would be interesting to know what others think.

·        When will AI reach human-level intelligence?
My own view is that although such things as quantum computers may reach the same speed of computation, I don’t believe AI can ever reach the human brain’s level of intelligence.

·        Do we live in a deterministic universe?
In big-scale physics, yes; but concurrent with that I believe there is acausality at the smaller end of the scale, as is evident in a certain creative freedom.

·        Is our present reality a simulation?
To some degree yes, but I also think we share the reality of our existence with its creator in some small way.

·        Does free will exist, and how does consciousness arise from computation?
Yes. Consciousness arises from factors outside of the computation, because of the ultimate interdependence of all things.

·        Can we separate rules and principles in a way that is computable for AI?
No – we need human checkpoints along the way.

·        How can AI navigate contradictions in human ethical systems?
It can’t – the best we can do is to have human checkpoints along the way.

See what you think.

Sunday, March 19

Strange Music: Song of Norway Full Quality Recording



The song "Strange Music" is from the "Song of Norway" which captures the beauty of the music composed by Edward Grieg.  It was common place for many operetta's of that era to be based on the musical compositions of prior famous composers. 

The operetta tells the story of Grieg and his love for his childhood sweetheart, Nina. The scheming opera diva Louisa Giovanni, besotted with Grieg, is determined to keep him away from Nina, so she insists that he accompany her as her pianist on world tours. Only through the love of his friends does he come to realize that Norway is where he will find true happiness. Grieg's role in this recording is sung by British operatic baritone Donald Maxwell, who has performed with all of the leading opera companies, given many outstanding concert recitals, and been widely broadcast on radio and TV. If you don't like this recording, you don't like ice cream.
 

Tuesday, March 7

Tipping Point

Good news continues to emerge on energy use, and we may be reaching a tipping point where world CO2 emissions start to decline. Although it is too early to be sure, the signs are very encouraging.

One pleasing aspect is China, which experienced the largest increase in energy productivity – 133 per cent between 1990 and 2015. This means the amount of energy used per unit of GDP (Note 1) has been rapidly declining, so total emissions can fall notwithstanding continuing increases in GDP. In other words, China can simultaneously reduce total carbon emissions yet still grow at a decent clip.
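As a quick back-of-the-envelope check (illustrative arithmetic only, assuming the 133 per cent productivity gain applies economy-wide):

    # Illustrative arithmetic, not official data: a 133% rise in energy
    # productivity (GDP per unit of energy) means energy intensity
    # (energy per unit of GDP) falls to 1/2.33 of its earlier level.
    productivity_gain = 1.33
    intensity_ratio = 1 / (1 + productivity_gain)
    print(f"Energy per unit of GDP: {intensity_ratio:.0%} of the 1990 level")
    # Prints about 43%: GDP could grow 2.33-fold before total energy use
    # (and, for an unchanged fuel mix, emissions) regained its old level.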

China presently accounts for 30 per cent of global carbon emissions, but at the Paris summit it pledged to cap its emissions by 2030. According to more recent figures published in Earth System Science Data, indicative of far less coal usage, that cap may already have been reached.

The study published in Earth System Science Data contends that global CO2 emissions from fossil fuels and industry are projected to grow by just 0.2 per cent this year. Its more recent data show that world emissions remained constant at 36 billion metric tonnes over the past two years despite increases in GDP.

Hence the worrying nexus between economic gains (more particularly in China) and inevitable emissions growth has been severed. Another important point is that, regardless of government policies, individuals and businesses alike are increasingly moving to carbon abatement or its elimination.
In fact, for most boards of directors of larger entities there is concern over the sustainability of any investments reliant on or supportive of fossil fuels. The fear is of a future class action by shareholders.

It doesn't mean one can become complacent, since the build-up of CO2 continues, but it does mean we may have reached a tipping point, such that total emissions could fall heavily in the decades ahead and the catastrophic outcomes predicted could yet be avoided.

[Figure: Global energy intensity continues to decline – graph of world energy intensity.]
Source: EIA, International Energy Outlook 2016, International Energy Statistics, and Oxford Economics
Note: OECD is the Organization for Economic Cooperation and Development. (1) GDP is gross domestic product – in other words, all the goods and services produced in an economy.