Wednesday, May 10

Lake Tyers

In April we visited Lake Tyers, a pleasant four-and-a-half-hour drive from Melbourne, just past Lakes Entrance in East Gippsland. The pictures above are of nearby Nowa Nowa, one of the lakes, the homestead and Ninety Mile Beach.
East Gippsland boasts eleven world-class coastal parks and reserves, with 400 square kilometres of lakes forming Australia's largest inland waterway. The area supports a wide variety of wildlife, including some 200 species of birds, as well as dolphins and pelicans. The Ninety Mile Beach, one of the longest uninterrupted beaches in the world, faces Bass Strait and backs onto the extensive network of lakes.
Lake Tyers Beach is on the south-eastern shore of Lake Tyers, close to where the lake's outlet enters the sea. Lake Tyers is a river valley separated from the ocean by only a thin strip of sand dunes; its outlet remains closed by a wide sandbar, except for occasional overflows after very heavy rain.
 
Although we did not participate in the walks organized by our friends from the bushwalking club, the evening meals and coffee stops provided many opportunities to share experiences about the day's events.
One member of the group who went for a brisk morning walk along the beach encountered a pair of emus, which maintained a steady gait ahead at a respectable distance before finally disappearing into the sand dunes. On another occasion a member's nephew, whose interest is conservation, gave a fascinating account of local bird life. Drawing on his local experience, he played us about 50 different bird calls from his app, identifying the rhythms and pitches characteristic of each species. The following day we shared a short walk with him as he identified the many diverse plants along the way.
 
Early history
The site was first visited by Europeans in 1846 and named after C.J. Tyers, then Commissioner of Crown Lands.
In 1861 an Aboriginal mission station was set up on the northern shore to farm crops, fruit trees and sheep. These agricultural pursuits were complemented by traditional hunting and fishing.
 
Closure of the mission station was proposed in the 1960s and many Aborigines moved to surrounding towns, particularly Nowa Nowa.
However, in 1970 the station was transferred to Aboriginal ownership. Although the school closed, other community services have been established and farming activities have been successfully extended.
Recent developments
Under the Traditional Owner Settlement Act 2010, the first such agreement was signed with the Traditional Owners, the Gunaikurnai people. Under it, the Gunaikurnai Land and Waters Aboriginal Corporation (GLaWAC), representing the Traditional Owners, jointly manages 10 parks and reserves in consultation with Parks Victoria. There are more than 600 Traditional Owners, all of whom have proven their ancestral links to one of 25 apical ancestors registered in the Native Title Consent Determination. Pictures below are of the settlement today.
More local history
A school and church were constructed in 1878. In 1886 a hotel and a grand guest house seating 120 guests were established. Visitors enjoyed the established forest walks, fishing and lake boating trips, and accessibility was enhanced once a coach service began from nearby Orbost.
The principal industry from the 1890s was timber, which was transported to Lakes Entrance by tramway. The logs were then floated across the lake to the various sawmills dotted around its shores until the late 1940s. Production gradually fell away until, in 1972, 5,300 hectares of remaining forest surrounding the lake were proclaimed a Forest Park.
The first local industry, however, was a glass factory established in 1908, which used local quartz sand to manufacture cups, bottles and ornaments until its contracts were lost and the factory was abandoned in 1912. Later a school and another guest house were constructed.
Postwar period
Subdivision into housing blocks and small estates ensued, and the local community and holiday homes were soon sufficient to support a general store. By the early 1960s boat ramps and jetties had been constructed to cater for the growing number of holidaymakers.
Today there are just two caravan parks, a general store, a hotel-motel and some holiday accommodation; the school had only 26 pupils in 2014. Lake Tyers Beach might still be aptly described as a small, very quiet holiday spot with a number of permanent residents.
What attracts most visitors is the fishing and boating on Lake Tyers; there is also the opportunity to hire boats or take scenic lake cruises.
Nowa Nowa
One day we met up at Nowa Nowa, about 20 km north of Lakes Entrance, whose early pioneers harvested timber for shipment to local sawmills. From the 1890s Nowa Nowa served for a couple of years as a seat of the Tambo shire council, but settlement did not take hold until the 1900s, when a school and a railway breathed new life into the fledgling community, followed by a road and bridges. By the 1930s there were six sawmills, a local football team, a rifle club, a Country Women's Association branch, a thriving collection of stores and a hotel.
Today there are about 140 residents in this township.
We enjoyed the township, the lake and the Nowa Nowa gallery, which displayed a sculpture made from the root system of a messmate (Eucalyptus obliqua). The tree grew in one metre of sandy loam on top of a limestone shelf; where the roots failed to penetrate the limestone, they grew laterally. Its age was estimated at up to 300 years, and the root system was seven metres wide.
Local historical site
Another excursion was to Nyerimilang Heritage Park, once the holiday retreat of a wealthy Melburnian but now in public ownership. The gracious old homestead of the 1920s era is open to the public, surrounded by an extensive garden and five walking tracks.

Saturday, April 8

A Streetcar Named Desire - "I can smell the sea air" - Renée Fleming




Famed operatic soprano Renée Fleming, who has, in my opinion, one of the finest soprano voices of the modern era (apart from the late Joan Sutherland), has announced her retirement from opera. She will, however, continue to give concert performances.

A Streetcar Named Desire is an opera composed by André Previn, based on the play by Tennessee Williams. The opera was created specifically with Fleming in mind, and her exquisite upper register and final pianissimo are evident in the aria above, "I can smell the sea air". The aria comes at the final moment, when Blanche is about to be removed to an institution. The words are taken directly from the play, and Fleming aptly portrays Blanche's emotional fragility in her stage performance. Although Blanche has withdrawn into an inner world of make-believe, there is also a hint of inner strength and hope for the future to sustain her, encouraged by a memory of the sweet fragrance of the sea air.



Saturday, March 25

A New Kind of Thinking


"A New Kind of Science," by Stephen Wolfram was a book I read over ten years ago just after it was first published.    
Stephen Wolfram is an interesting person. Born in London and educated at Oxford, he gained a Ph.D. in theoretical physics from Caltech at only 20 years of age. In the 1980s he made a series of discoveries that, he posited, yielded new insights into physics, mathematics, computer science and biology, shedding new light on how everything tends to work in our universe. Although his work initially attracted some criticism within the scientific community, it has ultimately provided food for thought on conceptual puzzles and longstanding issues in philosophy.

He went on to found Wolfram Research, whose Mathematica has become one of the world's leading software systems for technical and symbolic computation.

The book represented ten years of research and is of interest to scientists and non-scientists alike. The extensive use of graphics further enhances the presentation and makes his conclusions easy to grasp.
The basis of his experiments is the observation of the behavioural output of simple program instructions, or rules. By way of comparison, we are all familiar with the operation of software that produces our pay and invoicing records, or keeps vast quantities of information stored remotely in what is described as the "cloud".

Wolfram wrote simple programs and observed their outputs. He used black and white squares, but you could use any items (blue and white beads if they took your fancy), since it is only the output behaviour of those cells that is the subject of the experiment. Hence these systems are called cellular automata.

His early experiments yielded results markedly different from what he had anticipated, and this phenomenon, which so interested him, is what the book explores.
To reiterate, these computer programs can be described more accurately simply as rules. The programs tell the computer to carry out some instructions for a specified number of steps. There is no intention to achieve a result other than to see what happens. He begins with basic programming rules and builds up to mildly complex instructions.

Take, for example, the so-called rule 30.

An elaboration is as follows: start with a single black square, and apply the rule line by line. First, look at each cell and its right-hand neighbour. If both of these were white on the previous step, then take the new colour of the cell to be whatever the previous colour of its left-hand neighbour was; otherwise, make the new colour the opposite of that.
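The rule is short enough to state directly in code. Here is a minimal sketch in Python (my own encoding, not Wolfram's code), with 0 standing for a white cell and 1 for a black one; it applies exactly the update described above and prints the evolving rows:

```python
# Minimal sketch of rule 30: new cell = left XOR (centre OR right),
# which is just the verbal rule above written as a formula.

def rule30_step(row):
    padded = [0] + row + [0]  # treat off-grid neighbours as white
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def run_rule30(steps):
    width = 2 * steps + 1     # wide enough that the edges never interfere
    row = [0] * width
    row[steps] = 1            # start with a single black square
    for _ in range(steps):
        print("".join("#" if cell else " " for cell in row))
        row = rule30_step(row)

run_rule30(16)
```

Even a dozen or so rows are enough to see an irregular pattern beginning to form.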

Your intuition would tell you that if you followed such a programming rule, some sort of repetitive pattern should appear in the cellular output, since the same rule is being applied over and over again.

Yet after 1,500 steps of this program, involving some two million cells, there are no signs of any regularity, and the pattern obtained seems to continue to evolve.

Hence the 800-page book presents hundreds of experiments demonstrating this principle. Cellular automata and their principles have been debated in a variety of forums, from philosophy to how best to restore ecological systems to their condition before global warming. For a reference to how they are applied to ecological modelling, click on the link below:

Application to ecosystems.

From a philosophical stance, an entry in the Stanford Encyclopedia of Philosophy introduces the idea of a "Hat experiment" to explain how cellular automata work.

Reference: Berto, Francesco and Tagliabue, Jacopo, "Cellular Automata", The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/spr2017/entries/cellular-automata/

As an example of cellular automata, think of Fig. 1 as standing for the front row of a high school classroom. Each box represents a student wearing (black) or not wearing (white) a hat. Let us make the two following assumptions:

Hat rule: a student will wear the hat in the following class if one or the other—but not both—of the two classmates sitting immediately on her left and on her right has the hat in the current class (let us say that if nobody wears the hat, then a hat is out of fashion; but if both neighbors wear it, it is too popular to be trendy).

Initial class: during the first class in the morning, only one student in the middle is wearing the hat.
What happens as time goes by (consecutive rows represent the evolution in time through subsequent classes)?
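The hat rule is also easy to simulate. Here is a brief sketch in Python (my own encoding of the SEP example, with 1 for a hat and 0 for no hat); each printed row is one class later than the row before:

```python
# Sketch of the SEP "Hat rule": a student wears the hat next class
# exactly when one, but not both, of her immediate neighbours wears
# it now. Students at the ends simply have one neighbour.

def hat_step(row):
    n = len(row)
    return [(row[i - 1] if i > 0 else 0) ^ (row[i + 1] if i < n - 1 else 0)
            for i in range(n)]

row = [0] * 31
row[15] = 1                 # initial class: one hat in the middle
for _ in range(16):         # sixteen consecutive classes
    print("".join("#" if hat else "." for hat in row))
    row = hat_step(row)
```

Running it produces a nested, self-similar pattern of triangles, quite unlike anything you would guess from the one-line rule.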
What happens is surprising. The complex evolutionary pattern displayed contrasts with the simplicity of the underlying law (the "Hat rule") and ontology (in terms of objects and properties, we only need to take into account simple atoms and two states). In a sense, though, the global, emergent behaviour of the system supervenes upon its local, simple features. The scale at which the decision to wear the hat is made (immediate neighbours) is not the scale at which the interesting patterns become manifest.
While somewhat artificial, this example is a paradigmatic illustration of what makes CA appealing to a vast range of researchers: “even perfect knowledge of individual decision rules does not always allow us to predict macroscopic structure. We get macro-surprises despite complete micro-knowledge” (Epstein 1999, p. 48). Since the notion of emergence and the micro-macro interplay play such an important role in science and philosophy (see the entries on supervenience and emergent properties; for a sample of scientific applications, see Mitchell 2009, pp. 2–13, Gell-Mann 1994, Ch. 9), it has been suggested that many scientific as well as conceptual puzzles can be addressed by adopting the CA perspective. One of the leading thinkers in the field, Stephen Wolfram, has gone as far as claiming that CA may help us to solve longstanding issues in philosophy.

Hence Wolfram's experiments show that very complex systems can be built up from very basic underlying instructions or beginnings. Over the course of the book he shows the implications for many fields of current knowledge.

Fast forward to Wolfram's blog and you get further insight into his more recent thinking, which I have reproduced here.
A Short Talk on AI Ethics


Click on the link for graphical examples of cellular automata.  
October 17, 2016

Last week I gave a talk (and did a panel discussion) at a conference entitled “Ethics of Artificial Intelligence” held at the NYU Philosophy Department’s Center for Mind, Brain and Consciousness. Here’s the video and a transcript:
Thanks for inviting me here today.
You know, it’s funny to be here. My mother was a philosophy professor in Oxford. And when I was a kid I always said the one thing I’d never do was do or talk about philosophy. But, well, here I am.
Before I really get into AI, I think I should say a little bit about my worldview. I’ve basically spent my life alternating between doing basic science and building technology. I’ve been interested in AI for about as long as I can remember. But as a kid I started out doing physics and cosmology and things. That got me into building technology to automate stuff like math. And that worked so well that I started thinking about, like, how to really know and compute everything about everything. That was in about 1980—and at first I thought I had to build something like a brain, and I was studying neural nets and so on. But I didn’t get too far.
And meanwhile I got interested in an even bigger problem in science: how to make the most general possible theories of things. The dominant idea for 300 years had been to use math and equations. But I wanted to go beyond them. And the big thing I realized was that the way to do that was to think about programs, and the whole computational universe of possible programs.

And that led to my personal Galileo-like moment. I just pointed my “computational telescope” at these simplest possible programs, and I saw this amazing one I called rule 30—that just seemed to go on producing complexity forever from essentially nothing.
Well, after I’d seen this, I realized this is actually something that happens all over the computational universe—and all over nature. It’s really the secret that lets nature make all the complicated stuff we see. But it’s something else too: it’s a window into what raw, unfettered computation is like. At least traditionally when we do engineering we’re always building things that are simple enough that we can foresee what they’ll do.
But if we just go out into the computational universe, things can be much wilder. Our company has done a lot of mining out there, finding programs that are useful for different purposes, like rule 30 is for randomness. And modern machine learning is kind of part way from traditional engineering to this kind of free-range mining.
But, OK, what can one say in general about the computational universe? Well, all these programs can be thought of as doing computations. And years ago I came up with what I call the Principle of Computational Equivalence—that says that if behavior isn’t obviously simple, it typically corresponds to a computation that’s maximally sophisticated. There are lots of predictions and implications of this. Like that universal computation should be ubiquitous. As should undecidability. And as should what I call computational irreducibility.


Take rule 30 again: can you predict what it’s going to do? Well, it’s probably computationally irreducible, which means you can’t figure out what it’s going to do without effectively tracing every step and going through the same computational effort it does. It’s completely deterministic. But to us it’s got what seems like free will—because we can never know what it’s going to do.
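A concrete way to feel this (my own sketch, not Wolfram's code): the centre column of rule 30 has in fact been used as a source of pseudorandomness, and as far as anyone knows the only way to obtain its nth bit is to run all n steps of the automaton.

```python
# Sketch: the centre column of rule 30 as a bit stream. There is no
# known shortcut formula for bit n; you must simulate every step,
# which is computational irreducibility in miniature.

def rule30_centre_bits(n_bits):
    width = 2 * n_bits + 1
    row = [0] * width
    row[n_bits] = 1                      # single black cell to start
    bits = []
    for _ in range(n_bits):
        bits.append(row[n_bits])         # record the centre cell
        padded = [0] + row + [0]
        row = [padded[i - 1] ^ (padded[i] | padded[i + 1])
               for i in range(1, len(padded) - 1)]
    return bits

print("".join(str(b) for b in rule30_centre_bits(64)))
```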

Here’s another thing: what’s intelligence? Well, our big unifying principle says that everything—from a tiny program to our brains—is computationally equivalent. There’s no bright line between intelligence and mere computation. The weather really does have a mind of its own: it’s doing computations just as sophisticated as our brains. To us, though, it’s pretty alien computation. Because it’s not connected to our human goals and experiences. It’s just raw computation that happens to be going on.

So how do we tame computation? We have to mold it to our goals. And the first step there is to describe our goals. And for the past 30 years what I’ve basically been doing is creating a way to do that.
I’ve been building a language—that’s now called the Wolfram Language—that allows us to express what we want to do. It’s a computer language. But it’s not really like other computer languages. Because instead of telling a computer what to do in its terms, it builds in as much knowledge as possible about computation and the world, so that we humans can describe in our terms what we want, and then it’s up to the language to get it done as automatically as possible.

This basic idea has worked really well, and in the form of Mathematica it’s been used to make endless inventions and discoveries over the years. It’s also what’s inside Wolfram Alpha. Where the idea is to take pure natural language questions, understand them, and use the kind of curated knowledge and algorithms of our civilization to answer them. And, yes, it’s a very classic AI thing. And of course it’s computed answers to billions and billions of questions from humans, for example inside Siri.
I had an interesting experience recently, figuring out how to use what we’ve built to teach computational thinking to kids. I was writing exercises for a book. At the beginning, it was easy: “make a program to do X”. But later on, it was like “I know what to say in the Wolfram Language, but it’s really hard to express in English”. And of course that’s why I just spent 30 years building the Wolfram Language.

English has maybe 25,000 common words; the Wolfram Language has about 5000 carefully designed built-in constructs—including all the latest machine learning—together with millions of things based on curated data. And the idea is that once one can think about something in the world computationally, it should be as easy as possible to express it in the Wolfram Language. And the cool thing is, it really works. Humans, including kids, can read and write the language. And so can computers. It’s a kind of high-level bridge between human thinking, in its cultural context, and computation.
OK, so what about AI? Technology has always been about finding things that exist, and then taming them to automate the achievement of particular human goals. And in AI the things we’re taming exist in the computational universe. Now, there’s a lot of raw computation seething around out there—just as there’s a lot going on in nature. But what we’re interested in is computation that somehow relates to human goals.
So what about ethics? Well, maybe we want to constrain the computation, the AI, to only do things we consider ethical. But somehow we have to find a way to describe what we mean by that.
Well, in the human world, one way we do this is with laws. So how do we connect laws to computations? We may call them “legal codes”, but today laws and contracts are basically written in natural language. There’ve been simple computable contracts in areas like financial derivatives. And now one’s talking about smart contracts around cryptocurrencies.

But what about the vast mass of law? Well, Leibniz—who died 300 years ago next month—was always talking about making a universal language to, as we would say now, express it all in a computable way. He was a few centuries too early, but I think now we’re finally in a position to do this.
I just posted a long blog about all this last week, but let me try to summarize. With the Wolfram Language we’ve managed to express a lot of kinds of things in the world—like the ones people ask Siri about. And I think we’re now within sight of what Leibniz wanted: to have a general symbolic discourse language that represents everything involved in human affairs.
I see it basically as a language design problem. Yes, we can use natural language to get clues, but ultimately we have to build our own symbolic language. It’s actually the same kind of thing I’ve done for decades in the Wolfram Language. Take even a word like “plus”. Well, in the Wolfram Language there’s a function called Plus, but it doesn’t mean the same thing as the word. It’s a very specific version that has to do with adding things mathematically. And as we design a symbolic discourse language, it’s the same thing. The word “eat” in English can mean lots of things. But we need a concept—that we’ll probably refer to as “eat”—that’s a specific version that we can compute with.
So let’s say we’ve got a contract written in natural language. One way to get a symbolic version is to use natural language understanding—just like we do for billions of Wolfram Alpha inputs, asking humans about ambiguities. Another way might be to get machine learning to describe a picture. But the best way is just to write in symbolic form in the first place, and actually I’m guessing that’s what lawyers will be doing before too long.
And of course once you have a contract in symbolic form, you can start to compute about it, automatically seeing if it’s satisfied, simulating different outcomes, automatically aggregating it in bundles, and so on. Ultimately the contract has to get input from the real world. Maybe that input is “born digital”, like data about accessing a computer system, or transferring bitcoin. Often it’ll come from sensors and measurements—and it’ll take machine learning to turn into something symbolic.
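As a toy illustration of what "computing about a contract in symbolic form" might mean (this is my own invented encoding, not the Wolfram Language's actual representation), consider a delivery clause written as data rather than prose:

```python
# Toy "computable contract": the clause is structured data, so a
# program can check automatically whether it has been satisfied.
# All field names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class DeliveryClause:
    item: str
    max_days: int       # must deliver within this many days
    penalty: float      # amount owed if the deadline is missed

def evaluate(clause: DeliveryClause, days_taken: int) -> float:
    """Return the penalty owed under the clause for a given outcome."""
    return 0.0 if days_taken <= clause.max_days else clause.penalty

contract = DeliveryClause(item="widgets", max_days=30, penalty=500.0)
print(evaluate(contract, days_taken=28))   # 0.0   -> clause satisfied
print(evaluate(contract, days_taken=45))   # 500.0 -> penalty triggered
```

The real-world input (here days_taken) is exactly the kind of thing Wolfram says might be "born digital" or come from sensors and machine learning.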
Well, if we can express laws in computable form maybe we can start telling AIs how we want them to act. Of course it might be better if we could boil everything down to simple principles, like Asimov’s Laws of Robotics, or utilitarianism or something.
But I don’t think anything like that is going to work. What we’re ultimately trying to do is to find perfect constraints on computation, but computation is something that’s in some sense infinitely wild. The issue already shows up in Gödel’s Theorem. Like let’s say we’re looking at integers and we’re trying to set up axioms to constrain them to just work the way we think they do. Well, what Gödel showed is that no finite set of axioms can ever achieve this. With any set of axioms you choose, there won’t just be the ordinary integers; there’ll also be other wild things.
And the phenomenon of computational irreducibility implies a much more general version of this. Basically, given any set of laws or constraints, there’ll always be “unintended consequences”. This isn’t particularly surprising if one looks at the evolution of human law. But the point is that there’s theoretically no way around it. It’s ubiquitous in the computational universe.
Now I think it’s pretty clear that AI is going to get more and more important in the world—and is going to eventually control much of the infrastructure of human affairs, a bit like governments do now. And like with governments, perhaps the thing to do is to create an AI Constitution that defines what AIs should do.
What should the constitution be like? Well, it’s got to be based on a model of the world, and inevitably an imperfect one, and then it’s got to say what to do in lots of different circumstances. And ultimately what it’s got to do is provide a way of constraining the computations that happen to be ones that align with our goals. But what should those goals be? I don’t think there’s any ultimate right answer. In fact, one can enumerate goals just like one can enumerate programs out in the computational universe. And there’s no abstract way to choose between them.
But for us there’s a way to choose. Because we have particular biology, and we have a particular history of our culture and civilization. It’s taken us a lot of irreducible computation to get here. But now we’re just at some point in the computational universe that corresponds to the goals that we have.
Human goals have clearly evolved through the course of history. And I suspect they’re about to evolve a lot more. I think it’s pretty inevitable that our consciousness will increasingly merge with technology. And eventually maybe our whole civilization will end up as something like a box of a trillion uploaded human souls.
But then the big question is: “what will they choose to do?” Well, maybe we don’t even have the language yet to describe the answer. If we look back even to Leibniz’s time, we can see all sorts of modern concepts that hadn’t formed yet. And when we look inside a modern machine learning or theorem proving system, it’s humbling to see how many concepts it effectively forms—that we haven’t yet absorbed in our culture.
Maybe looked at from our current point of view, it’ll just seem like those disembodied virtual souls are playing videogames for the rest of eternity. At first maybe they’ll operate in a simulation of our actual universe. Then maybe they’ll start exploring the computational universe of all possible universes.
But at some level all they’ll be doing is computation—and the Principle of Computational Equivalence says it’s computation that’s fundamentally equivalent to all other computation. It’s a bit of a letdown. Our proud future ending up being computationally equivalent just to plain physics, or to little rule 30.
Of course, that’s just an extension of the long story of science showing us that we’re not fundamentally special. We can’t look for ultimate meaning in where we’ve reached. We can’t define an ultimate purpose. Or ultimate ethics. And in a sense we have to embrace the details of our existence and our history.
There won’t be a simple principle that encapsulates what we want in our AI Constitution. There’ll be lots of details that reflect the details of our existence and history. And the first step is just to understand how to represent those things. Which is what I think we can do with a symbolic discourse language.
And, yes, conveniently I happen to have just spent 30 years building the framework to create such a thing. And I’m keen to understand how we can really use it to create an AI Constitution.
So I’d better stop talking about philosophy, and try to answer some questions.

After the talk there was a lively Q&A (followed by a panel discussion), included in the video. Some of the questions are listed below.

Conclusion
I have inserted my own brief answers to the following philosophical questions, but it would be interesting to know what others think.

When will AI reach human-level intelligence?
My own view is that although such things as quantum computers may match the brain's speed of computation, I don't believe AI can ever reach the human brain's level of intelligence.
Do we live in a deterministic universe?
In large-scale physics, yes; but alongside that I believe there exists, at the smaller end of the scale, a looser causality, as is evident in a certain creative freedom.
Is our present reality a simulation?
To some degree yes, but I also think we share the reality of our existence with its creator in some small way.
Does free will exist, and how does consciousness arise from computation?
Yes. Consciousness arises from factors outside of the computation, because of the ultimate interdependence of all things.
Can we separate rules and principles in a way that is computable for AI?
No; we need human checkpoints along the way.
How can AI navigate contradictions in human ethical systems?
It can't; the best we can do is to have human checkpoints along the way.

See what you think.