Code.org Advertisement and no-WFH

Recently, code.org released a promotional video featuring people like Mark Zuckerberg of Facebook and Bill Gates of Micro$oft, saying American schools should teach more programming.

 

I don’t like it.

 

I don’t think programming is for everyone, nor that more programming necessarily serves social good or scientific advancement. It lowers the cost of labor for all those people in the advertisement, but it isn’t as good as it sounds.

 

As a person who completed a CS degree, I feel that computer languages can be made much better, so much better that there won’t be such a thing as “computer programming” anymore.

 

The first time I thought about how stupid this stuff I do is was the day I tried to teach my dad to program a for-loop in C, and he turned around and teased me about forgetting the closed-form expression for an arithmetic series. It was the expression on my dad’s face… I remember it vividly… for it was then that I realized I had never comprehended the sheer vulgarity of

for(int x=0;x<100;++x);

so primitive, so stupid.

The next time was when I read about MapReduce: sooo freaking cool. I think tomorrow I will find another way to think, another way to speak, and another way to program.

 

I want to make a better programming language, a better computer. That would be better than community colleges teaching Fortran, IMHO.

 

Oh, and p.s.

I think Yahoo!’s new no-WFH policy is nice. I think it is real progress for the protection of civil liberty in America. Technology companies insist on ownership and monitoring of their employees’ work, and are admittedly justified in doing so. Therefore, when Marissa Mayer decided to cancel all WFH, she made a call that will end monitoring of employees’ home networks, because if you don’t work from home, the company will have no cause to instrument any kind of monitoring of your home network.

I think this is a really forward-thinking technology leader who cares about her employees. I am buying myself some Yahoo! stock in support of this bold move.

The Ethical Hierarchy III

Recall, from last time, this illustration of the Ethics Hierarchy overlaying capability sets in the space of all transitive actions:

[Figure: gold versus silver 1]

I should simplify terminology. The set labeled “Things I want” is “my desires,” “Things Jesus wants me to do” is the Jesus way, “Things Confucius wants me to do” is the Confucian way, “Things I can do” is “my strengths,” and “Things that can be done to me” is “my weaknesses.”

 

I should also like to begin referring to what I have been calling the Ethics Hierarchy as the Moral Hierarchy. My own postings exhibit cultural bias: I include more from eastern culture than western culture. Some comments I have received indicate that there are others out there who have looked at culture/art/literature comparisons with the opposite bias. The fact of the matter is, this Moral Hierarchy itself does not imply absolute superiority of any kind. Relatively speaking, one set is larger, containing more transitive actions, than the other, but bigger is not necessarily better. In fact, it is one of my hopes to understand how they are different. Reasonably speaking, I should not expect to find that one is superior to the other. Quite the opposite: I feel that exhaustive investigation of this subject will reveal to us more about the way the world, humans, and our society are than about the rights and wrongs within their contexts. Because morals have cultural biases and ethics is the philosophical study of morals, I may switch back to Ethics Hierarchy when I wish to emphasize that I am trying to be objective.

 

Therefore, to continue, let us be fair: what is drawn are idealized sets and intersections. The Jesus way is actually one of many allowed sets of actions, constrained by an inner bound: it must contain my desires. Under this prescription alone, one can do everything in the universe and still not violate the Golden Rule. The Confucian set, similarly, is one of many possible sets fully contained in my desires. The Silver Rule imposes a maximum outer bound: one can only do things within my desires. Under the Silver Rule, one cannot do everything in the world.

 

Some extreme examples: for instance, a person who goes around slaughtering each person with a knife is allowed under one interpretation of the Jesus way, as long as he also does to each person everything that he wants done to himself. This can be quite arbitrary; say the perpetrator wants to be fed carrot cake, then he feeds everyone carrot cake and then knifes them. What’s worse is if the person is a masochist, for then he is forced to act as a sadomasochist: if he wants to be fed carrot cake and knifed, then the Jesus way requires that he _must_ both feed everyone carrot cake and knife them.

 

On the other extreme, suppose one tries to follow the Taoist suggestion to do nothing; this easily falls within the Confucian way, regardless of the size of my desires. Meanwhile, the only way for a person to do nothing under the Jesus way is for him to want nothing. This is impossible, because followers of the way of Jesus at least want to enter heaven; so, trivially, the Jesus way is never empty and prevents its followers from doing nothing.

 

It’s interesting to think of the possibilities. Let’s look at just the Confucian way. Set D is outside of my strengths and outside of my weaknesses; however, because I desire it, it is within the Confucian way. Set A holds things I desire that are within my strength to do but outside my weakness to receive: these are the things that I can only give and will never receive in kind. On the opposite end, set C holds things within my desires and weaknesses but not within my strengths. O is the set of my opportunities: the things I want but am not yet capable of receiving. Set B is a sweet spot. Here, not only are we within the ways of both Confucius and Jesus, we also desire it. This is the region to maximize, if we had the choice to do so.

 

[Figure: gold versus silver 3]

 

It should be pointed out that zone B contains only actions that we can reciprocate when receiving and receive reciprocation for when giving, in kind: in other words, the an-eye-for-an-eye, a-tooth-for-a-tooth zone.

 

Zone U in this graph marks an area of the an-eye-for-an-eye-a-tooth-for-a-tooth zone that is outside of my own desires, and therefore not recommended by Confucius, but is allowable under the Jesus way.

 

Let’s backtrack and admire the an-eye-for-an-eye-a-tooth-for-a-tooth zone T in its full glory:

 

[Figure: gold versus silver 2]

 

Wow! It does exist!

With Higher Knowledge Comes Higher Responsibility

The other day at work (and by now you know I work for a Japanese automotive electronics company), we talked about autonomous cars for consumers. Since everyone is either a technology freak or a car freak, the discussion was pretty intense.

 

I explained to everyone the ethical issues surrounding autonomous cars that may not be completely resolved, or even resolvable, by technology.

 

The matter is this: an autonomous car will, with absolute certainty, be faced with a situation where it has to choose between two actions, each of which will kill a different person. Suppose two people suddenly dash in front of the car, one to the left and one to the right, and suppose that the car is moving too fast to stop. It can veer to avoid one person with certainty. But which will it choose?

 

Another scenario: the car can brake very hard and avoid killing a pedestrian, but in the process it will kill its passenger, because the car is mechanically able to endure much higher deceleration than its occupants can.

 

There is a legal problem too: if I configure the car, or if some car company configures the car, to always protect its owner (rational enough), are I the owner, the designer, and the manufacturer then liable to be sued for killing people?

 

“But your honor, the car swerved!! I had nothing to do with it”

 

Okay, so the people who want autonomous cars (myself partially included) will say that with better equipment, high-speed video/audio recording, and black boxes, there might be far fewer arguments about who was responsible for accidents. But there are some things in our current law that are absolute. If a car hits a person inside the crosswalk, the car is always responsible. If a car is rear-ended, the car behind is responsible. What will happen to these absolute rules, which are in many circumstances unreasonable but serve to protect the safety of the population?

 

And finally, even if autonomous vehicles reduce deaths to 1% or less of today’s vehicle-related death rate (and I believe they will), in that 1% of cases where two people dash in front of the car and the car has to choose, what then? Why is this so hard?

 

One of the big problems is that an informed decision is hard. Given today’s technology (machine learning for object detection, vision algorithms, radar, laser range scanners, EEG/EKG, EMR technologies), the car can pretty reliably detect, with plenty of time to choose which person to save, that there are two people dashing in front of the car, one to the left and one to the right, along with their velocities, estimated trajectories, masses, the certainty of those estimates, and the margins of error (where else each person could likely be by the time of collision, and so on).

The reason humans get away with killing in this situation is that we do not have the speed and ability. It is beyond our control, until we program a computer to do it; then we are suddenly faced with choices we never had to make before: kill left, kill right, or maim both? Or risk killing both? Or kill myself to completely avoid their injury?

Hmm, let’s see. What would Confucius allow? What would Jesus insist on? Well, I don’t want to be killed, so don’t kill other people. I would want other people to save me, so I would want to brake and save both crazy people. Hmm, I guess it really depends on the person’s desires. One could say a more moral person may not wish for another moral entity to suffer for his sake, nor to exchange another’s life for his own. But by and large, most people would ask the car to save themselves no matter which position they are in.

The moral problem arises in that we are not in any of those three positions. We are in the autonomous car designer’s shoes. We are in Asimov’s shoes. What should we write as the laws of autonomous vehicles? We know that at some point the car will know, almost certainly, that it must kill/damage/disrupt someone or something, and will know exactly which wire to send an electric signal down to choose which person. What should we tell the car to do?

Because soon the car will be looking at that scenario in slow-mo… with 10 ms to decide and then 250 ms to turn the steering wheel left or right and apply the brakes.

So, as you can see, the mere knowledge of morality and the capability to choose encumber us with the responsibility of behaving morally. Because I know it’s wrong, I must not do wrong. Another person may think that the root of this evil is the fact that I know of this moral dilemma, and that I have gained the speed to travel fast, or the ability to determine people’s fate.

I wonder if they are right that those things are works of the devil, and that the absolute best moral thing to do is just to stay away from them. I should consider this carefully. What if I find that it is wrong for me to live? Or wrong for me to blog about morality? What if it is found that the internet is not moral? Or, god forbid, that it is immoral to have stereo audio in cars? Because I already have the ability to terminate any one of them, at least for myself.

 

*shiver*

 

p.s.

I can accept an argument that placing oneself into a situation where there is no moral choice is itself immoral. The autonomous car makers will insist that the car drive carefully so that it will never be faced with two people in said situation. But somehow science, technology, human inquiry may find a way to inform us that that is just delusional, that it is provably impossible to avoid crazy humans. 😉 Back to square one, I suppose.