Monday, December 15, 2008

Goddamn ant macosx xml problem


[ chrisn chris-nuernbergers-macbook-pro ~/dev/editor/lambinator ] ./buildme.sh
Buildfile: build.xml

init:

compile_lambinator:
[java] Compiling lambinator.experiment to /Users/chrisn/dev/editor/lambinator/classes
[java] Compiling lambinator.ui to /Users/chrisn/dev/editor/lambinator/classes
[java] java.lang.ExceptionInInitializerError (ui.clj:1)
[java] at org.apache.tools.ant.taskdefs.ExecuteJava.execute(ExecuteJava.java:194)
[java] at org.apache.tools.ant.taskdefs.Java.run(Java.java:747)
[java] at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:201)
[java] at org.apache.tools.ant.taskdefs.Java.execute(Java.java:104)
[java] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
[java] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[java] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[java] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[java] at java.lang.reflect.Method.invoke(Method.java:585)
[java] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105)
[java] at org.apache.tools.ant.Task.perform(Task.java:348)
[java] at org.apache.tools.ant.Target.execute(Target.java:357)
[java] at org.apache.tools.ant.Target.performTasks(Target.java:385)
[java] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1329)
[java] at org.apache.tools.ant.Project.executeTarget(Project.java:1298)
[java] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
[java] at org.apache.tools.ant.Project.executeTargets(Project.java:1181)
[java] at org.apache.tools.ant.Main.runBuild(Main.java:698)
[java] at org.apache.tools.ant.Main.startAnt(Main.java:199)
[java] at org.apache.tools.ant.launch.Launcher.run(Launcher.java:257)
[java] at org.apache.tools.ant.launch.Launcher.main(Launcher.java:104)
[java] Caused by: java.lang.ExceptionInInitializerError (ui.clj:1)
[java] at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:2684)
[java] at clojure.lang.Compiler$BodyExpr.eval(Compiler.java:3631)
[java] at clojure.lang.Compiler.compile(Compiler.java:4564)
[java] at clojure.lang.RT.compile(RT.java:362)
[java] at clojure.lang.RT.load(RT.java:404)
[java] at clojure.lang.RT.load(RT.java:376)
[java] at clojure.core$load__4557$fn__4559.invoke(core.clj:3427)
[java] at clojure.core$load__4557.doInvoke(core.clj:3426)
[java] at clojure.lang.RestFn.invoke(RestFn.java:413)
[java] at clojure.core$load_one__4520.invoke(core.clj:3271)
[java] at clojure.core$compile__4563$fn__4565.invoke(core.clj:3437)
[java] at clojure.core$compile__4563.invoke(core.clj:3436)
[java] at clojure.lang.Var.invoke(Var.java:327)
[java] at clojure.lang.Compile.main(Compile.java:52)
[java] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[java] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[java] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[java] at java.lang.reflect.Method.invoke(Method.java:585)
[java] at org.apache.tools.ant.taskdefs.ExecuteJava.run(ExecuteJava.java:217)
[java] at org.apache.tools.ant.taskdefs.ExecuteJava.execute(ExecuteJava.java:152)
[java] ... 20 more
[java] Caused by: java.lang.ExceptionInInitializerError
[java] at com.trolltech.qt.QtJambiObject.<clinit>(QtJambiObject.java:57)
[java] at java.lang.Class.forName0(Native Method)
[java] at java.lang.Class.forName(Class.java:164)
[java] at clojure.core$import__3583.doInvoke(core.clj:1600)
[java] at clojure.lang.RestFn.applyTo(RestFn.java:142)
[java] at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:2679)
[java] ... 39 more
[java] Caused by: java.lang.RuntimeException: Loading library failed, progress so far:
[java] Unpacking .jar file: 'qtjambi-macosx-gcc-4.4.3_01.jar'
[java] Checking Archive 'qtjambi-macosx-gcc-4.4.3_01.jar'
[java]
[java] at com.trolltech.qt.internal.NativeLibraryManager.loadNativeLibrary(NativeLibraryManager.java:428)
[java] at com.trolltech.qt.internal.NativeLibraryManager.loadQtLibrary(NativeLibraryManager.java:352)
[java] at com.trolltech.qt.Utilities.loadQtLibrary(Utilities.java:137)
[java] at com.trolltech.qt.Utilities.loadQtLibrary(Utilities.java:133)
[java] at com.trolltech.qt.QtJambi_LibraryInitializer.<clinit>(QtJambi_LibraryInitializer.java:53)
[java] ... 45 more
[java] Caused by: java.lang.RuntimeException: Failed to unpack native libraries, progress so far:
[java] Unpacking .jar file: 'qtjambi-macosx-gcc-4.4.3_01.jar'
[java] Checking Archive 'qtjambi-macosx-gcc-4.4.3_01.jar'
[java]
[java] at com.trolltech.qt.internal.NativeLibraryManager.unpack(NativeLibraryManager.java:365)
[java] at com.trolltech.qt.internal.NativeLibraryManager.loadLibrary_helper(NativeLibraryManager.java:434)
[java] at com.trolltech.qt.internal.NativeLibraryManager.loadNativeLibrary(NativeLibraryManager.java:423)
[java] ... 49 more
[java] Caused by: javax.xml.parsers.FactoryConfigurationError: Provider org.apache.xerces.jaxp.SAXParserFactoryImpl not found
[java] at javax.xml.parsers.SAXParserFactory.newInstance(SAXParserFactory.java:113)
[java] at com.trolltech.qt.internal.NativeLibraryManager.readDeploySpec(NativeLibraryManager.java:496)
[java] at com.trolltech.qt.internal.NativeLibraryManager.unpackDeploymentSpec(NativeLibraryManager.java:521)
[java] at com.trolltech.qt.internal.NativeLibraryManager.unpack_helper(NativeLibraryManager.java:389)
[java] at com.trolltech.qt.internal.NativeLibraryManager.unpack(NativeLibraryManager.java:360)
[java] ... 51 more
[java] --- Nested Exception --- (same trace as above, repeated verbatim; elided)


Solution:
sudo mv /usr/share/ant/lib/xercesImpl.jar /usr/share/ant/lib/xercesImpl.jar.back

Don't ask me how I figured that one out. (My best guess as to why it works: with ant's bundled Xerces jar on ant's classpath, JAXP asks for org.apache.xerces.jaxp.SAXParserFactoryImpl, which isn't visible to the in-process java task; moving the jar aside lets it fall back to the JDK's built-in parser.)

Sunday, December 14, 2008

Clojure Projects

I don't really know how to start this other than to look at various ways you can create functionality in a clojure project and attempt to categorize them in some productive way.

The first I want to talk about is the repl. Hopefully you have had the pleasure of working with a good repl setup (SLIME's is the best I have ever used, have you seen better?). REPL stands for read-eval-print-loop. It looks like a command prompt and you type in some stuff and immediately see what the result is. This is similar to when you break in a debugger and you can both analyze values *and* do edit and continue.

I am going to go out on a limb here and state that the REPL is the fastest way to add new functionality. You type in a couple of statements and you see, right after you type them, whether they worked or not. This is like Christmas. It is a fucking blast if you have never used it; trust me. It feels weird at first, but you just have to use it for a couple of days and you will be amazed at how much stuff you get to work. The rate at which you can add functionality to a new system is directly related to the rate at which you can get feedback on the correctness of that functionality. Typed languages use the static type system to give you slightly faster feedback, but nothing means anything until the bits are actually moving; thus the repl is the best. It is more fun than diagnosing Hindley-Milner errors anyway.

OK. The problem with the repl is that you would have to type in your program every time. So you have files where you save the code, and you dynamically load those into the repl. You can use either (load-file "fname") or slime-compile-and-load-file from emacs. This is almost as much fun as the repl and a lot more repeatable. The file is still dynamically loaded, but at least you can replace whole sets of functionality at once, easily.
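For instance (the path, namespace, and function name here are made up for illustration), with a file like:

```clojure
;; src/experiment/scratch.clj -- hypothetical path and namespace
(ns experiment.scratch)

(defn greet []
  (println "hello from a loaded file"))
```

at the repl, (load-file "src/experiment/scratch.clj") followed by (experiment.scratch/greet) runs the freshly loaded definition, and re-running load-file after an edit replaces it in place.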

Next we can compile a file and put it into a jar file; or more generally some compilation unit. These are *not* dynamically loadable to my knowledge; thus if you want a program to run for a long time you will load the jar files once and then you are off. Hopefully the program is completely correct and never needs runtime updates.

Jar files are more easily reusable and distributable than stand alone text files. They also protect your IP to some extent although I really don't give a shit about that.

So we have three levels of adding code to the system, in order of ease of mutability: repl, repl-loaded, and jar'd. So how do we set up a system that makes all this cool?

Well, let's say we have a set of .clj files. We want an ant project that will compile and jar up all of them. I will set this up in a second.
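A minimal sketch of such a build file (the project layout, jar locations, and namespace name are assumptions; clojure.lang.Compile and the clojure.compile.path property are the standard AOT entry points, and fork="true" is needed for the system property to take effect):

```xml
<project name="lambinator" default="jar">
  <property name="clojure.jar" location="lib/clojure.jar"/>
  <property name="classes" location="classes"/>

  <target name="compile">
    <mkdir dir="${classes}"/>
    <!-- clojure.lang.Compile AOT-compiles the namespaces given as args,
         writing .class files to clojure.compile.path -->
    <java classname="clojure.lang.Compile" fork="true" failonerror="true">
      <classpath>
        <pathelement location="${clojure.jar}"/>
        <pathelement location="src"/>
        <pathelement location="${classes}"/>
      </classpath>
      <sysproperty key="clojure.compile.path" value="${classes}"/>
      <arg value="lambinator.experiment"/>
    </java>
  </target>

  <target name="jar" depends="compile">
    <jar destfile="lambinator.jar" basedir="${classes}"/>
  </target>
</project>
```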

Now, we have to get this jar file's path onto our classpath somehow. You can either add it at runtime via (add-classpath ...) or put it in the java -cp startup argument.

Now say we find that some function in this jar file is wrong. Can we dynamically replace this function in a running system? Well, assuming it is referenced via one level of indirection, then yes, we can. This is what the whole environment and "defn" system gets you, I am assuming. But this requires a test.

Basically, if I can create a jar file and then replace some of its functionality in the running system, then I am golden; I can have fast, precompiled code *and* I can update the system and try out fixes and new ideas dynamically. I believe, due to the way clojure's Vars work, that I can get this stuff running. But hope is not a strategy, so let's try some of this out.

The first step is to make a jar file out of some clojure file. In the simplest case, I have a clojure file that has a single function in some namespace, and I jar it up using ant. Next I add it to my classpath and load it into the repl, checking that it works. Finally I replace said function somehow using the repl.

There is one guaranteed caveat that was there in common lisp and it is here too. If you create a closure dynamically this won't be replaced if you load a new file. Thus you need to figure out what level of dynamic functionality you want and avoid closures where necessary, or create closures that immediately call into the namespace vars.

So here we go. Into the lambinator directory and create a subdirectory called src/experiment

OK, copied clojure-contrib's jar file (after actually getting it to produce useful output; you have to tell ant where clojure.jar is). Got the dynjar.clj file compiling into a classes directory and the classes shoved into a jar file. At this point I am now as good at clojure as I ever was at java.

I have to update my ~/bin/clojure script to add this jar file to my classpath. I know this will change per project but emacs needs this command; I could perhaps have project-specific .emacs files but I am really not that ambitious.

In my test file, I have one function that returns an integer, and another function that returns the result of the first. Now the test is that if I load the jar file and mess around with the repl I should be able to get the second function to return a new value by replacing the first.

So:
a -> 5
b -> a -> 5

In repl, I will change a to return anything but 5. If this works, then I can dynamically update my running program with new stuff!
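Spelled out in clojure (the namespace name is made up), the a/b test above looks something like:

```clojure
(ns experiment.dynjar)

;; a returns 5; b returns whatever a returns,
;; resolved through a's Var at call time.
(defn a [] 5)
(defn b [] (a))
```

Then from the repl, after loading the jar, (experiment.dynjar/b) should return 5; evaluating (in-ns 'experiment.dynjar) and (defn a [] 6) rebinds a's Var, so a subsequent call to (experiment.dynjar/b) should return 6 without reloading b at all.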

Let's see what happens:

Hell happened. Jar hell.

load-file does the equivalent of import and require on all the objects in the file. This was not quite what I expected.

I will update this post later when I have much more information.

Getting emacs working

Looked through the clojure-mode source for some good stuff:

unfortunately, clojure-load-file doesn't seem to work due to:
comint-get-source, and then after I start slime *inferior-lisp-proc*.

slime-load-file worked just fine. I wonder if there is a slime-load-all-out-of-date-files. That would be damn useful.

Some shizzle you should know

OK, so something I use a lot in other editors is bookmarking facilities.

So, in emacs we have two functions that I really care about: bookmark-set and bookmark-jump.

So, the idea is to set a point in a given file you know you will need later. You name these so they are unique.

A bookmark is a fully named entity, so these commands ask you for a name.

set: C-x r m

jump: C-x r b

The next thing I use all the time are registers if I have access to them. Basically, I want to select a bit of text and shove it into some named variable I will reuse in different contexts. Registers in emacs are single-character named entities you can put text (and other things) into and get it out of.

These commands will ask for a single character register name to place information into or get it from.

(select region then..)
copy: C-x r s
paste: C-x r i

In general, try to get by without using copy/paste. It *really* is the work of the devil. In any form.

OK, down to the basics. Remember this is aquamacs...
aquamacs-isearch-forward: A-f
aquamacs-repeat-isearch: A-g
aquamacs-repeat-isearch-backward: A-G
isearch-backward: C-r
isearch-forward: C-s


Don't forget basic navigation, either:
C-p
C-n
C-b
C-f

Finally, I put these two lines in my .emacs file.

(global-set-key [f4] 'slime-compile-and-load-file)
(global-set-key [f3] 'find-name-dired) ;; This one is *super* useful in larger projects

Trying out the various slime functions, they don't appear to work very well. The awesome ones are 'slime-who-calls and stuff like that; but anyway...

Well, we have hit the point of somewhat diminishing returns. Thus in the next post we will look at the next steps of a non-trivial clojure project.

Saturday, December 13, 2008

Clojure App dev, step 1 -> repl-QT-repl-QT

The goal of this step is to ensure you can start a qt frame from the repl, switch to qt's event handling (thus leaving the repl) but get back to the repl using a button.

This is critically important; I need to be able to run the application and then update it dynamically *without* restarting anything. I can't tell you how many times, while debugging something like mouse picking or moving objects around in 3d space, I had to break into the debugger and check out what was going wrong. Then I would make an attempt at a fix, recompile, reload the presentation, and repeat. Over and over and over and fucking over again. And again. What I would like is that when I see a given function isn't working correctly, I just update the function definition *while I am running the editor*. This is goddamn important. With this working, complex, really usable features are much easier to get working correctly.

OK, so here we go!

It should look like:

(start from repl)
open qt frame with one button, return to repl.
switch to qt's event handling thus effectively leaving the repl
push a button and return to repl *without* closing frame
add a menu item to the existing frame, thus ensuring we can mutate the datastructure and get a good update
exit main frame; thus going back to repl.
see what the state of the main frame is; we may be able to open it

This relies on one key assumption: that I can exit qt's event handling without closing the window. I believe I can, but I don't know.

If this assumption doesn't hold then I can always store, latently, the commands for creating the UI. Then when I want to repl around with shit I can destroy the entire UI, update the commands to add new elements, and re-create it. This is heavy-handed and rude, but it might work. It would remove perhaps 30-40% of the functionality that I want, however, so I will really try to avoid this.

Reading the QT documentation, it looks like you call QCoreApplication.exec() to get things going and then you can call QCoreApplication.exit() in order to return from the event loop. So your return-to-repl button should just call exit, perhaps. There is also a processEvents call. Essentially, you could loop over processEvents and have your button set a global that tells you to stop. But if exit works, then that is the sweetest.

Simple clojure QT example

This should get me started. There is something really weird about learning a new language. You don't know how to create a hash or a vector, and it seems every character you type is wrong. It takes a couple of days before you (or perhaps just I) get things going. On the other hand, that level of comfort with the language has benefits. You focus far more on the algorithm than on details; this leads to better code.

For example, if you had lived in C all your life and were asked to do something, one of the first things you would visualize or plan out would be the memory access system, because the most complex part of most C programs is the memory handling; at least as complex as what the program is actually meant to do. It takes a few other languages with garbage collection before you begin to think at a higher level first and work down to the memory level only if you have to.

Side tracked:

BillC figured this all out first
This too. It hasn't worked for me yet, though

Well, this blows. I have been trying to diagnose an error for quite some time now, it looks like:

[Thrown class java.lang.ExceptionInInitializerError]

0: com.trolltech.qt.QtJambiObject.<clinit>(QtJambiObject.java:57)
1: java.lang.Class.forName0(Native Method)
2: java.lang.Class.forName(Class.java:169)

Checking shit out, it isn't immediately clear what is going on. Time to research how to debug this error....

The demos run just fine, so I know it is possible to run applications. Trolltech -helpfully- included a binary starter to the demos so I can't easily see what is going on.

Holy shit that hurt!

Why God Why Doesn't It Work?!?1!1?

**Don't use JDK 6**

Nice, 2 hours of fucking around with java madness. I finally figured out the magic google query that would find the problem exactly:

java.lang.UnsatisfiedLinkError libQtCore.4.dylib

So I need to amend my other post.

Some more things are working. It appears that QApplication.exit actually closes all open widgets. This isn't exactly what I want. So I will just call process events from a loop and have a button set a variable to break out of the loop.

Thus I would like to be able to define a variable in the namespace, something like "process_events_running", and have the single button in my application set it to false. Then I will provide a custom exec function that calls processEvents in a loop, checking that variable.

The simplest way to do this would be a closure. I don't immediately see how to do this, but I remember Rich Hickey stating that lambdas are "ICallable" or something like that. I know that the signal system in QtJambi uses reflection, so if I just pass a closure in as the "this" argument to connect, and pass in the name of the call function on the ICallable, I may be able to get somewhere. The interface is Callable:

user> (lambda `(println "hello"))
; Evaluation aborted.
user> (fn [] (println "hello"))
#
user> (set x (fn [] (println "hello")))
; Evaluation aborted.
user> (def x (fn [] (println "hello")))
#'user/x
user> (. x call)
hello
nil
user> x
#
user> (x)
hello
nil
user>

Still not finished, but I need some food.

OK, nice microwave pizza and I am back.

So, passing in a closure to connect works fine. I do it like this:

(defn exec []
  (def exec_var 1)
  (while (== exec_var 1)
    (QApplication/processEvents)))

(defn create_app_frame []
  (ensure_app_init)
  (let [app (QApplication/instance)
        button (new QPushButton "Go Clojure Go")]
    (.. button clicked (connect (fn [] (def exec_var 0)) "call()"))
    (doto button
      (.resize 250 100)
      (.setFont (new QFont "Deja Vu Sans" 18 (.. QFont$Weight Bold value)))
      (.setWindowTitle "Go Clojure Go")
      (.show))
    button)) ; return the button for further reference

Now I need things to sleep, because I am chewing up CPU calling processEvents over and over again. There is a hasPendingEvents call; so now all I need is a way to make the system sleep in a platform-independent way. Java has Thread.sleep in the language, and that is that.
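A version of the exec loop with the sleep added might look like this (a sketch: the 10ms interval is my own guess, and I am assuming hasPendingEvents is reachable as a static through QApplication):

```clojure
;; Only run processEvents when events are pending; otherwise sleep
;; briefly so the loop doesn't peg the CPU. 10ms is arbitrary.
(defn exec []
  (def exec_var 1)
  (while (== exec_var 1)
    (if (QApplication/hasPendingEvents)
      (QApplication/processEvents)
      (Thread/sleep 10))))
```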

I didn't mess with the datastructures of the frame yet (mainly because I just have a button). But I have a QT app running from the repl and, most importantly, returning to the repl at the push of a button. I had to do a lot of work for this first app: get emacs working, run up against problems with the mac java implementation interacting with QT, and learn a little bit of clojure (which was the best part). The benefits are huge, though, because QT is a good platform to move forward on and because clojure is an order of magnitude more powerful than java; and I personally believe that developing from the repl is much more powerful than developing from a compile/run standpoint.


At this point, I would love to upload all files related to this. I can't, so I started a github project where I will put all the code.
Git R Dun

OK, to quickly review what you will need to get shit working:

QT - I had version 4.4.3_01
java - 1.5.0
aquamacs emacs
git, svn, cvs, and the latest versions of:
(svn from various other places)
clojure
clojure-contrib

(from jochu's git repository)
clojure-mode
swank-clojure

(cvs. I wish this project had some better regression testing systems)
slime

Watch every single presentation here:
Rich Hickey

Watch them again until you really get it.

Next up will be an emacs post; I need to remember how to use/navigate within emacs and how to integrate with slime better.

Java application development

When writing a new application in Java you really have several choices. You can use SWT, Swing, and perhaps QT, to name a few. On Linux you can probably use some GTK wrapper, but I don't know.

Everyone wants extensible applications, but what does this really mean? The answer to this question is kind of important. There is the eclipse-style plugin architecture; plugin with a capital P. You will not write a better plugin architecture than this. You may write one that is more minimal and suits exactly your needs, but it won't be better, for sure.

I am stuck at this point. I would in some cases like to build an application off of the eclipse system but there is something about it I don't like; mainly the steep learning curve. My application will not look like eclipse and I am not certain it should use the workspace API. If you don't use the workspace plugin as the basis for your application, then the way I see it most of eclipse isn't really that necessary.

I am not certain I like how SWT looks, or how responsive (or otherwise) it is. Swing is out of the question because I am going to use OpenGL. QT is my absolute top pick, but then I have to design the application framework myself.

The question is: is it worth it to spend serious time learning the eclipse application development foundation *and* figuring out how to get it working in a live-coding sense, or should I just mess with QT?

It really bothers me, more than I would like. I have built applications before; the framework (the command architecture, input handling, the UI sub-structure) is really pretty detailed and takes a lot of effort. If you want chording inputs like emacs, then you have to do a lot more work, but that is honestly the only thing that makes sense. Now you want internationalization? That will take a bunch more work; something else eclipse has already solved well.

On the other hand, if you want great user interaction, eclipse arguably does not have that solved. I don't like using eclipse; I always get into weird states with it and it really is slow. I would take visual studio over anything in most cases, followed closely by vim or emacs. Eclipse is a distant, distant choice I would only use if it really made sense. Perhaps this is ignorance on my part, but learning a new, large, non-trivial IDE is never enjoyable for me. Visual studio crashes like it was born to, but it is still pretty quick to do things with.

Actually, thinking about it, I guess I could try to write some clojure extensions for eclipse before I jump into doing things with QT. The problem is that the UI lib is only 10% of the problem space that I don't care about any more. Everything else I mentioned is also part of the space I don't want to solve in some amateur, one-off fashion.

But this is the thing. I am not going to use a system that doesn't support live coding. This means either I can launch it from the repl and come back to the repl by pushing a button (thus exiting the application's event handling) *or* I need some repl abilities *in* the app itself.

If I went with QT, I could run everything from the emacs-slime repl. If I go with eclipse, I will first need to build an extension that works with the repl.

Following my heart at this point means QT. Following my head means eclipse.

I guess that answers the question. I will mess with QT and hopefully come up with stuff that could be moved to eclipse should the need arise.

Clojure on mac osx

1. Download java 6
--EDITOR: Do not do this if you want qtJambi to work on MacOSX.
--This is because there are only 64-bit java 6 implementations for OS X, and
--Trolltech has not released 64-bit compatible libraries for OS X *yet*. They don't appear
--to be all that close at the moment, either.
--You need to use 1.5.0, which is probably installed by default *if* you are using 10.5.
--EDITOR

go here and link Current and CurrentJDK to 1.6.0:
/System/Library/Frameworks/JavaVM.framework/Versions/
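Something like the following, I believe (the exact version-directory and symlink names may differ on your machine, so check what is actually under Versions/ first):

```shell
cd /System/Library/Frameworks/JavaVM.framework/Versions
ls -l                      # see which 1.6.0 directory is actually there
sudo rm Current CurrentJDK
sudo ln -s 1.6.0 Current
sudo ln -s 1.6.0 CurrentJDK
```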

2. Set up aquamacs with slime:

http://groups.google.com/group/clojure/msg/ecbf7f87343d7f3f

Make sure to get both swank-clojure and clojure-mode from jochu's git repository

Be prepared to update all four of clojure, swank-clojure, clojure-contrib, and slime, so be sure you get the subversion/git/cvs versions and can easily go into a directory and grab fresh ones.

Yay! I have slime and clojure up and running!

That really wasn't very hard; now I can mess around with clojure a bit and try some things out. It is sweet: open up emacs, press f5, and there you go! I saw some errors flash by at one point, but right now I don't care; the repl appears to be working (along with tab-completion of function calls).

Except now I have to remember all the emacs tricks that I have converted to vim tricks and forgotten about...

Why are emacs key bindings so crazy?

Tuesday, December 9, 2008

What is Eclipse?

When you think of Eclipse (I will refer to it as eclipse) you probably think of a large, corporate java editing platform. This is what it has become, but it isn't what it is at heart.

The core piece of eclipse is a plugin framework. This is the extent of my eclipse knowledge.

Given nothing else, eclipse is a system for managing plugins. Plugins can publish interfaces and 'extension points' (whatever those are), and they can state what interfaces they require for correct operation. They do this in an xml file, so Eclipse can delay-load everything based on a large plugin xml database.
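For a flavor of what that xml looks like, here is a minimal sketch of a plugin.xml contributing a view (org.eclipse.ui.views is a real extension point; the ids and class names here are made up for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.2"?>
<plugin>
   <!-- Hypothetical contribution: a view that eclipse can list in its
        registry and lazily instantiate the first time it is opened. -->
   <extension point="org.eclipse.ui.views">
      <view id="com.example.replview"
            name="Clojure REPL"
            class="com.example.ReplView"/>
   </extension>
</plugin>
```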

The end goal of this project is to enable a live-coding environment in eclipse that can edit the eclipse application it is running in. I will attempt to do this using clojure, but I don't actually know how possible this really is. It should be possible, and if I am successful it would be a darn cool way to specialize the editing platform.

First off, I want to really understand what eclipse is. What datastructures are involved, and how do you talk to them? I will want to understand what eclipse needs for various types of plugins, and I will attempt to ascertain how amenable the system is to live updating while it is running.

All I really need to do, however, is figure out how I can build an eclipse plugin in clojure. There is a clojure plugin for eclipse already that is written in java, but I will be god damned before I spend my free time working in java. I did that once and it took months of studying pure math for me to undo the dumness. I was awash in waves of dumnity and I almost drownded.

I am just going to study things like I usually do and update this blog as I study.   I am currently downloading the eclipse framework.  I already downloaded clojure and have no idea how to do anything with it but the repl.  Perhaps I can extend eclipse using the repl?  Is there (load-file "blah") functionality?  REPLs are nice, but typing things once is nicer.


Gotta start somewhere....

Opening the eclipse I downloaded. It is missing the "What the fuck are you?" button.

Reading the manual about the workspace, it appears that there is a small runtime kernel and a workspace on top of that. Where is the definition of the runtime component?

Two things going through my mind: 1. how ugly the interface is and 2. how slow it is to update. This isn't ideal; there really shouldn't be noticeable time between moving a window and seeing it change. I have worried about this before, and thus using the eclipse plugin framework with the Qt UI framework has definitely come to mind.

Another thing I don't like about eclipse as it stands is that it has moved some substantial project management functionality into its UI. These seem like orthogonal concepts to me...

perhaps a little closer:
http://www.eclipse.org/equinox/

Downloaded the osgi specification. I know this is overkill but I gotta. I hate it when you have to register for things to get them btw. It is a fucking spec, it is in everyone's interest for it to become common knowledge...

--"This aspect of the Framework makes an installed bundle extensible after deployment: new bundles can be installed for added features or existing bundles can be modified and updated without requiring the system to be restarted"--

This would seem to be the key enabler for a live-coding environment...

OK, reading a bunch about this stuff is interesting and I guess now it is time to start taking a bunch of other steps.

Overall, the OSGi system is an entire execution environment. Whereas when you compile a basic java program you get access to all of the java libraries you have referenced from your jar file, an OSGi bundle has access only to libraries it has declared it needs through a set of constraints.

So the classloader is specialized depending on your bundle specification and the existence of other bundles in the system. The environment takes care of things like loading two different versions of the same object, unloading and reloading shit and various other details.
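To make that concrete, an OSGi bundle declares those constraints in its MANIFEST.MF. The bundle name and versions below are made up for illustration, but the shape is roughly this:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.livecode
Bundle-Version: 0.1.0
Import-Package: org.osgi.framework;version="1.3.0"
Export-Package: com.example.livecode.api;version="0.1.0"
```

The framework wires the classloader so the bundle sees exactly the Import-Package entries it asked for (resolved against other bundles' Export-Package entries) and nothing else.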

What I was thinking was a lot lower level. If I implement stuff in clojure such that any interface I give to eclipse has a pointer to an implementation, then I could conceivably just update the implementation pointer when I load the new file. That assumes I can find the interface I handed back to eclipse, but in a forever-running runtime that should be possible. This would work regardless of OSGi nonsense, but at least there is one less term I have to filter out due to lack of information.
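A minimal Java sketch of that pointer-swapping idea (class and interface names are mine, not anything from eclipse): the host holds one stable object forever, and "loading a new file" boils down to swapping the delegate behind it.

```java
import java.util.concurrent.atomic.AtomicReference;

public class HotSwapDemo {
    // The interface the host (eclipse, say) holds onto forever.
    interface Command {
        String run();
    }

    // A stable wrapper: the host keeps this object, we swap its guts.
    static class SwappableCommand implements Command {
        private final AtomicReference<Command> impl;

        SwappableCommand(Command initial) {
            this.impl = new AtomicReference<>(initial);
        }

        public String run() {
            return impl.get().run(); // always delegates to the current implementation
        }

        void swap(Command next) {
            impl.set(next); // "loading a new file" ends up here
        }
    }

    public static void main(String[] args) {
        SwappableCommand cmd = new SwappableCommand(() -> "v1");
        System.out.println(cmd.run()); // v1
        cmd.swap(() -> "v2");          // live update; same object the host was handed
        System.out.println(cmd.run()); // v2
    }
}
```

The host never notices the swap because its reference never changes; only the level of indirection behind it does.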

Next up is getting some basic clojure editing working with my aquamacs emacs system. I got stuck earlier on classpath issues, so I need to read a lot more clojure editing environment tutorials. I am not certain what the classpath means on a mac; I couldn't find a classpath environment variable....

Chris

Monday, December 1, 2008

Welcome to the club

I just paid another fee into the club of total pwnage. For those of you who don't live on computers, that means that I just ended up on the receiving end of some shit. I had a part in that shit, and it was wonderful, but now things are all going lots of different ways.

As a lot of stories go, this one involves two women and one man. Their names and identities are withheld to protect the condemned.

One was an ex-gf, and one was a friend of the ex. There was some sleepin', seducin', and general debauchery goin' on, among things not fit to mention.

Needless to say, it all came crashing down hard with the man a victim of his own desire (first time for everything, hah!), and the women both victims of their own petty rivalries. Mutually assured destruction created mutual destruction.

There is a certain poetry to doing something really really dumb. Among that, ideally a certain amount of pain.

The whole situation just seems pretty contrived, really. I might also mention that the ex spent the night at the new boyfriend's house. And yet she was so angry about things in the morning.

The guy was seduced. A beautiful woman shows up at 12:30 at night at the house and what? You think the guy, who is not dating anyone, is going to just chill? Seduction rocks! Yay!

If anyone wants to try seducing me, btw, please, don't hold back. Furthermore, don't be surprised by the consequences.

Tuesday, November 18, 2008

Abstractions, Symbols, and Intelligence

Buckle up, settle down, and get ready for something exquisite. I am inspired tonight and what I present here should take a while to really sink in.

The primary use case of a computer is to augment human intelligence. We know that much. I feel, however, that in order to build an intelligence augmentation device you really need to understand how intelligence works.

People can only handle a certain amount of information at one time. No matter how smart you are, the number of simultaneous details you can keep in your head isn't really that high.

You have to build abstractions, and a given abstraction of some concept I will call a symbol.

It is funny, when you are learning new things, how we fight building a given symbol. A lot of times, after something becomes clear, I am amazed at how concise it turns out in my head. Many times I have really worked hard to figure out why I fought a given concept so hard. Math is usually like this; when we are alert but relaxed we can pay enough attention to really learn it and enjoy the feeling of new concepts bouncing around in our heads.

Given time you get comfortable with a symbol and you start to incorporate it into your view such that you don't even notice it. Then the symbol gets semi-randomly combined with other symbols, and if the new symbol is interesting or enjoyable you remember it. This process continues indefinitely in all of us, all the time.

You should take time to learn new concepts that are interesting and different from what you know. As you build layers of abstractions and symbols to really adapt to the new experience, you will make associations and new connections between things you have known for a long time but been unable to see from the new perspective. Thus learning N brand new and different things really teaches you N^2 (N-squared) or more new things; intelligence and learning compound at least quadratically (this relies on the assumption that each really is a very new thing)!

In any case, we all think by making symbols and manipulating them in some sort of systematic fashion.

The symbols you are allowed to make in a programming language, btw, directly relate to how exact and sophisticated a concept you can express with it. Higher level programming forces you to make clearer and higher level abstractions. The type system provides the rules for combining the symbols you make with the programming language.

Now let's talk about programs and get completely specific and practical. Most programs are very limited in the forms of symbols they allow you to make. Here are a few examples of building symbols:

The first is an abstraction over a collection of objects. I have all these paragraphs organized into a chapter. Now I can re-arrange these chapters and form a book. That is a literal hierarchy of symbols. You can bet that in order to have a book laid out in a sophisticated manner the author has built a lot of abstractions about what the chapters mean and exactly how the flow of the story wanders through the literature.

Another example would be that I have these groups of formulas in a single excel spreadsheet. In the user's mind, they are going to build some symbol out of that spreadsheet that allows them to reason about its capabilities without knowing every single line of code in it.

The second is the ability to extend a given symbol with new information. This form of abstraction is akin to a master-instance relationship where you are in effect saying "this item is just like that one, but it differs in a few aspects here". Artists really love to use this form to create crazy interesting concepts. DJs, Hollywood, and video game makers all use this form of abstraction to some greater but mostly lesser benefit.

It is very common on the web, where you have a templating system so that a given site will have web pages with some standard styles, header and footer, but override a lot of other things. Most systems use templates incorrectly, however. For one, a template isn't supposed to be just something you copy and start from. If you can't change the final product by changing the template then you haven't built an abstraction; you have just added more information. In the common case of Word documents, if I have a bunch of documents based on a template I should be able to change the headers in all the documents by changing the header style of the template. This doesn't happen (you might argue that it is safer that way, and that is fine; I am talking about being able to build complex abstractions, which requires re-evaluating concepts you already know and thus carries the inherent risk of changing existing symbols. If you ensure that you *can't* damage existing goods, real learning or abstracting isn't happening).

You see this in programs a lot, where you like a set of programs to have a standard look and feel. You get used to using these programs quickly because you aren't building new symbols from low level concepts; instead you are understanding the differences between what you are doing and what the other program was doing. Ctrl-C in windows is copy in most programs. This means that while its meaning may be context specific, the general mental symbol that stands for copy is consistent across programs. You need fewer symbols to use the new system.

Let's look at how someone learns a song. You get the main feel of the piece; this is the first symbol you build. Then, given this rough, broad symbol, you continue adding detail and forming new symbols until you build the song out to a tolerable richness or accuracy of reproduction.

I can't immediately think of a third way of learning anything that doesn't fall into a combination of the two ways I stated before.

This is where orthogonality of concepts really comes in. Clear, orthogonal symbols are composable in ways that rough, non-orthogonal concepts are not. That matters because creating new symbols, either through grouping or through templating, is what learning really is.

Earlier tonight I realized that my picking technique was too strict. Playing guitar, doing some random drill I found out of a book to warm up my hands and clear my mind I realized that I could pick this drill several ways; I didn't need to really develop some strict strategy and pick it the exact same every time. I then tried many techniques to see which ones felt the best; the drill had a lot of jumping from one string to another and to get it to work well I tried a hybrid-alternate picking style where I arpeggiated the jumps. This meant that if the jump was down several strings I used a down-stroke on both strings, and then used alternate up-down picking where I was playing a line of notes on a single string. If the jump was up several strings then I used an up-stroke for both the last string before the jump and the first string after.

Then I realized that I could also pick it with a strict alternate picking style, where I could pick down on the top string, thus moving the pick towards a lower string, *move completely over the lower string*, and then pick up on the lower string. This meant I was technically doing more than the most minimal amount of work to pick the drill, but it allowed my wrist to make a more natural and better-timed stroke, and since I had the control to do the strict style, I liked it better because the timing felt easier.

This was an example where my symbol I had built for picking -- it had to be the most minimal cost route to getting things done -- was way too strict. The truth of the matter is that wrists are damn quick and for most situations easily move the pick fast enough. Doing things in the most comfortable way that allows you to play the notes is more important than mechanical efficiency. Knowing this then gave me a lot more confidence with songs because I started noticing all the places where I really had somewhat inconsistent picking but because I have worked on control so much it doesn't affect my ability to play the song.

This form of abstraction, templatization, becomes too strict when the template has too much information. It then eliminates possibilities that really should be valid.

So what is the point of all this? Just for once I am going to ask for some reader participation. I just gave up a lot to come up with all that above; it has taken me years to figure it out. So please, if you enjoyed the above explanation, let's take it to the next step.

You have two distinct forms of building symbols, aggregation and templatization (or master-instance if you like).

1. Are there other distinct forms of building symbols?
2. How could you apply these ways of building symbols to the programs we use? For example, given a Word document, are there different ways you can aggregate and templatize the document? We are familiar with document templates; can you have a meta-template, a common template across templates? What about aggregations of templates, and what would that imply? Aggregations of pages make a chapter, but under what scenarios do you want to use a page as a template or master of another (revision control comes to mind)?

Friday, November 14, 2008

Graphics Abstractions

I am not really sure how to start this, but let's talk tonight about useful graphics abstractions; meaning let's run over lots of different ways you could group items.

- Geometry buffers and index sets into a geometry buffer.
- A geometry buffer mixed with some sort of material system would give you something you could see; let's call it a geometry object.
- Anything with some 3d transform information you could call a node.
- Out of these node objects you can produce a scene graph.
- Take a subgraph of these meant to represent a single entity (like a person) and you have a model.
- For a given model there will be a set of animations that are meant to be applied to it, a model w/ possible animations you would call...what? Perhaps a character?
- A large scene graph possibly containing groups of models you could call a new model.
- For a given model it may make sense to have several states the model could be in. Perhaps with armor or without, perhaps glowing or perhaps not.
- This includes the larger scene graph; thus you have groups of models, each model having several states and most likely lots of animations you can apply to it.

This is all talking about immediate mode things, or I guess you could call them instance level things. But really any given model would perhaps have a canonical form along with an instance level form. You can really have several levels of instancing, perhaps with a master state that the other states reference and just change bits of.

Oh, by the way, the actual set of properties on a given object is not well defined. They can change. At runtime.

Now let's look at this from a different perspective....

Let's say you have a given state of an object. Normally in a 3d environment there is a relatively high repetition of various assets; thus you have a bunch of chairs that all look identical or very similar.

So you introduce various different levels of master-instance relationships. Really what is happening is that you have different representations of a given object that you can write things to and read from. This may be getting too vague to follow quickly, but in a product I am currently working on we have:

Schema -> defines default properties
Library -> defines some changes to the defaults
State -> defines some changes to the library; contains animation systems
Scene graph -> appends some properties to some objects (like global transform)
Scripting -> changes the final result of stuff and sets properties
Render engine -> renders the result of the pipeline

Thus the processing pipeline for an arbitrary property on an object looks like:

schema -> library -> state -> animation -> scene graph -> scripting -> render engine.
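A toy Java sketch of how such a layered property lookup might work (the names are hypothetical and the real system is far bigger): each stage refines its parent, and reading a property walks back toward the schema until some stage defines it.

```java
import java.util.HashMap;
import java.util.Map;

public class PropertyPipeline {
    // One stage of the pipeline (schema, library, state, ...).
    static class Stage {
        final String name;
        final Map<String, Object> overrides = new HashMap<>();
        final Stage parent; // the stage this one refines, or null for the schema

        Stage(String name, Stage parent) { this.name = name; this.parent = parent; }

        Stage set(String key, Object value) { overrides.put(key, value); return this; }

        // Walk up toward the schema until some stage defines the property.
        Object get(String key) {
            for (Stage s = this; s != null; s = s.parent) {
                if (s.overrides.containsKey(key)) return s.overrides.get(key);
            }
            return null; // nothing in the chain defines it
        }
    }

    public static void main(String[] args) {
        Stage schema  = new Stage("schema", null).set("color", "gray").set("scale", 1.0);
        Stage library = new Stage("library", schema).set("color", "red");
        Stage state   = new Stage("state", library).set("scale", 2.0);

        System.out.println(state.get("color")); // red  (inherited from the library)
        System.out.println(state.get("scale")); // 2.0  (overridden by the state)
    }
}
```

Chaining more stages (animation, scene graph, scripting) is just more links in the same parent chain, which is also why the library -> state section can repeat.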

Capiche? Lots going on here. Now, to read a given property you need to know which stage of the pipeline you care about looking at.

By the way, the library -> state section can be repeated; thus you have multiple states, both in series and as siblings, in some sort of 3d object state graph.

The point is that we are thinking about writing a giant object abstraction that takes all of these details into account. This will be a behemoth of an object database system, completely custom to what we are doing. We will be making huge abstractions and bundling them up into simple, minimal interfaces.

Now comes the good part; why is this important?

Because people think in abstractions and symbols. They like to design things in abstractions and symbols. Plus they like to take an object, use it but change it slightly. So there are two large abstractions we are supporting generically.

The first is to allow arbitrary groupings of objects and to name these groupings. Then you should be able to use them as a distinct unit. A set of animations bundled into an animation group could define a running movement where you are animating a lot of things.

Sets of these groupings could be used to mark the set of animations that are explicitly used for a given model, which is itself a grouping mechanism for various other details.

Given a model (ignoring animation for a moment), you may want it in different states where it is blue or red, or perhaps armored as I said before, or perhaps otherwise.

Building these large hierarchies of abstractions is what allows us as feeble humans to actually achieve very large things; it is important that our software recognizes and reflects this paradigm.

Next, we like templates, or prototypes, that we can use and change without changing the source data. We like instancing things from master relationships. I am not yet certain in my head how all these things should work out, but I am just now beginning to see what the next step in 3d graphics composition and application design really is.

Chris

Saturday, November 8, 2008

Creating software 101

Let's take a look at what is required to manage the development of medium to large scale software products.

Someone has a big idea. From my perspective, someone identifies a need and a customer base.

This boils down to a set of vague features. You can group features in several different ways, but you need to decide on a hierarchy of them; each level could be a release.

This is where the first set of problems comes in: picking the features. You need to figure out, as constructively and objectively as possible, which features are going to hit the sweet spot of not a lot of work for a shitload of cash.

As a small aside, what it boils down to for me is the most functionality for the fewest lines of code. I look at every single line in the system as a liability; something that needs to be tested, verified, refactored, and all manner of other forms of maintenance.

In the real world, however, not in Chris' world, what it boils down to is amount of cash made per line which I guess has a direct correlation to hourly rate. What the entire company should be interested in is making the most money with the smallest, tightest code base. You *don't* want your developers writing code as fast as they can every day. Ideally you want them refactoring old code, redesigning modules to take in new information, and shaking the last few bugs out of old features.

Anyway, we have features. Let's say some miracle happened and your expert marketing department did their job and picked a set that would be dynamite.

So far we have:

Idea -> Customer Research -> Master Feature Set -> Badass telepathic marketing research -> Beta, Alpha, and Release Feature set.

OK, we just got that far. There is another factor in the equation, however, and that is how much work each feature requires. The design and dev team need to figure out some kind of rough map and communicate it back to marketing so that the feature set hits the sweet spot of the least amount of effort for the most money.

We haven't talked about the design or dev team yet, but there is an iterative sequence and feedback loop that happens throughout the process, and one iteration looks like this:

Feature -> Design -> Dev -> Cost Analysis -> Badass telepathic marketing research -> New Better Simpler Feature

The cheapest feature to implement is the one you don't do. Never ever forget this. It is far *far* cheaper to remove details at this level, the highest level, than at the design or pump-code level. Nothing comes for free, and each new feature has an n^2 effect on complexity because it will interact with existing features and make adding the next feature harder. In addition it makes the testing matrix larger and gives your sales team another detail to get tripped up on while they are trying to figure out where a potential customer is coming from.

Each piece of implemented system brings, along with the promise of cold, hard cash, the threat of carrying heavy chains of senseless complexity and pointless detail into each and every design decision later on. So think long and very very hard about exactly what you are going to do before you begin the process of doing it.

OK, let's say we have a feature set we are confident in. Now comes the fun part; you have an iterative process between a design and customer advocacy team and the development team where the look, feel, and functionality of the product is hammered out for each largish feature. This should involve mock-ups, prototypes, using existing programs to see what the customer base is used to, artistic talent, and a good, pragmatic eye towards a minimal cost route.

We then break features down into stories, stories into lists of requirements, and requirements into lists of tasks. The more thorough you are with this breakdown the better, as it allows a clearer picture of the work required to move forward.

There isn't any software that does this well, but really what you want to build here is a large graph of dependencies. This is because a given set of stories may generate interleaving requirements on any given software module.

What is useful is to be able to ask a question like "If we eliminate X, how much less work is it?" Along with this you have the converse: "If we add Y, what is the impact on the system?"

So, in a hierarchy of generators, we have:

Feature -> Stories -> Requirements -> Tasks.

This maps reasonably well to the actual work required to do anything. Each of these arrows is a 1 -> N relationship although multiple stories may generate the same requirement from the software's perspective. For example if you have a good serialization system then you can save/load the system as well as cut/paste between applications.
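The dependency-graph idea above can be sketched in a few lines of Java (all names and costs are invented for illustration): the "if we eliminate X, how much less work is it?" query only counts requirements that no surviving feature still needs, which is exactly why the shared serialization requirement is cheap to keep.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WorkGraph {
    // feature -> stories -> requirements; requirements carry a cost and may be shared.
    final Map<String, Set<String>> featureToStories = new HashMap<>();
    final Map<String, Set<String>> storyToReqs = new HashMap<>();
    final Map<String, Integer> reqCost = new HashMap<>();

    Set<String> reqsOf(String feature) {
        Set<String> reqs = new HashSet<>();
        for (String story : featureToStories.getOrDefault(feature, Set.of()))
            reqs.addAll(storyToReqs.getOrDefault(story, Set.of()));
        return reqs;
    }

    // "If we eliminate X, how much less work is it?" -- only count the
    // requirements that no surviving feature still needs.
    int savingsIfDropped(String feature) {
        Set<String> stillNeeded = new HashSet<>();
        for (String f : featureToStories.keySet())
            if (!f.equals(feature)) stillNeeded.addAll(reqsOf(f));
        int savings = 0;
        for (String r : reqsOf(feature))
            if (!stillNeeded.contains(r)) savings += reqCost.getOrDefault(r, 0);
        return savings;
    }

    public static void main(String[] args) {
        WorkGraph g = new WorkGraph();
        g.featureToStories.put("save/load", Set.of("save story"));
        g.featureToStories.put("cut/paste", Set.of("paste story"));
        g.storyToReqs.put("save story", Set.of("serialization", "file dialogs"));
        g.storyToReqs.put("paste story", Set.of("serialization", "clipboard"));
        g.reqCost.put("serialization", 10);
        g.reqCost.put("file dialogs", 3);
        g.reqCost.put("clipboard", 2);
        // Dropping cut/paste only saves the clipboard work; serialization is shared.
        System.out.println(g.savingsIfDropped("cut/paste")); // 2
    }
}
```

The converse query ("if we add Y, what is the impact?") is the same walk in reverse: sum the cost of the new requirements that aren't already in the graph.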

In any case, this should highlight just exactly how important it is that you come up with a minimal feature set. Then the design must be very good but also very smart so each feature generates the minimal set of stories. Finally the dev team needs to be careful with how they break down stories to end up with minimal requirements and minimal tasks.

Now we get into the details of what happens when you stop thinking about doing something and start doing it.

People, when testing the software, will come up with all sorts of things. Some things they come up with are additional features or new stories. Other things will be actual defects, where a given piece of software does not meet the specifications. Finally, there will be details that are annoying but outside the scope of what was specified; they should still be tracked.

The design that I like is to have an issue database. This database is filtered by the design and product development teams to produce a defect database, additional story features, and requests to redesign sections of the product.

The reason I have an issue database is that bugs are things that should require fairly immediate developer attention. These are defects in the system, and a release with a large number of them indicates a faulty process for creating software. Bugs are like development gold; you rush to them, fix them to the best of your ability, and think about how they happened, as they teach you a lot about how you are developing software.

Good design, both at the product level and at the engineering level is key to minimizing everything in the issue database. A good design at the product level makes certain kinds of problems impossible. A perfect design from the product design point of view means the customer *can't* make a mistake. A perfect design from an engineering point of view means that bugs can't happen.

It isn't that you have a developer so badass that they --do not-- make mistakes; it is that they think about their code so much that they implement a design that --can not-- fail. There is a large difference between don't and can't. One requires discipline and one requires genius.

This is, btw, my problem with a lot of software. It is written with too much discipline and not enough genius but I digress...

Now let's think about what implementing something actually means. Let's start with a simple but perhaps non-obvious assumption.

Every time you add a line of code to the system you destabilize it to some extent. Thus when you are writing a bunch of code and adding capabilities you *will* have bugs; there is no way around it. You will break old pieces that were working, even with unit tests and all manner of other stabilization devices. This is a fact of life; change means both moving forward and, in some senses, moving backwards, and of course no one wants to move back.

Also, let's make another assumption: the number of bugs, issues, and various other forms of feedback a feature or story will generate is directly dependent upon its complexity. This should be obvious; with two engines you are roughly twice as likely to have an engine failure as with one.

This, btw, is why a lot of dual engine airplanes are less safe than their single engine counterpart. It took people a long time to design a dual engine design that could fly capably with only one engine. This meant that there are a lot of aircraft designs that are twice as likely to crash when the initial idea was to have redundancy and thus greater safety.

In order to have a stable beta, alpha, and release you need to *not* be changing the software that much. Let's graph capabilities added to a software system over time. You want it to look like a bell curve, where the hump in the middle is the area of highest activity, and you ramp into such development as well as ramp out of it.

This is because you want a stable product in the end. Thus you ramp up slowly, thinking a lot about design and how to accomplish what you are doing. Next you pump code and your QA team starts ripping you a new arse hole. Then you start switching resources to fixing QA issues and not so much adding new capabilities. As the release gets closer you begin to really focus on bugs, and capabilities take a back seat.

The graph of bugs people find will mirror your graph of adding capabilities to the system, just later in time. How much later depends on your ability to test the software effectively and on your ability to fix bugs whose solutions will reveal or cause new bugs. You want your release to coincide with the point of diminishing returns, where the new bugs being found are by and large not worth fixing because they will have minimal customer impact or will be fixed by stories scheduled for after the release.

You really want a good QA team. You want a rare combination of smart and disciplined for QA more than anything else. If your QA team isn't smart, then your best and brightest customers will find your worst bugs, and thus you have lost some of your greatest advocates. If they aren't disciplined, then they will not test all the combinations of features they could, and your average customers will run into random issues just messing around with the product in an interesting way.

The QA team and the dev team don't have to be drinking buddies, but there shouldn't be animosity either. During a long release cycle, however, I know that I start to get aggravated, and so does our QA dept, and we stop speaking nearly as much.

In any case, a new story or capability is an issue generator, but it doesn't generate all its issues right away. Bugs fixed will reveal new bugs, and you will get chains of bugs that are very difficult to fix quickly.

Finally, there is a point when you want to show the world what you have done. You have faith in your marketing dept, and their research is solid and smart. You have confidence in your customer research system, and of course in your big idea. Your product design team has been creative and done a great job of delineating a clear vision of how the product will look and feel from a customer perspective. The dev team is a team of patient, smart, tough geniuses who have produced smart, tight software design from day 1. Your QA team doesn't take bullshit from anyone, and while they can break most pieces of software just by looking at it, they can't touch your current hotness.

It is time for the demo; it is time for everyone to work together and think out a set of scripts that will shock and awe, amaze and delight all potential customers; and it is time to mobilize the sales force. These people shouldn't think about anything but cash. They need to be cut-throat; they need to be able to really get into what they are selling but also be capable of reading each new prospect like a book. They are the front line, the marines so to speak, and now the fate of the entire operation rests on their shoulders. They need to take ground, and what they do will ultimately make you all the money in the world or provide an excuse in your next job interview.

They will feed ideas back into the feature and story databases and will provide another source of information about how the product is working in the real world.

In any case, get a great idea and bring all of this complex machinery together and you are a long way above most companies in terms of your ability to bring great software to market. If any of these pieces are weak then your software, regardless of the vision or idea behind it, will not stand the test of time or customers.

Wednesday, October 29, 2008

Types and Languages Notes

Notes from:
http://channel9.msdn.com/shows/Going+Deep/Erik-Meijer-Gilad-Bracha-Mads-Torgersen-Perspectives-on-Programming-Language-Design-and-Evolution/

Mads is a real cool dude.

Gilad is smart but very conceited.

Erik is the man.

People coming from dynamic languages get used to typed languages that have primitive type systems.

Static typing where possible and dynamic typing where necessary.

Pluggable type systems allow type system specialization that is specific for the domain (Newspeak).

Type systems provide:
Early error detection.
Documentation.
Help with the design process, as they force structured thinking about the problem.
Quick feedback.
Better intellisense on code; the extra information you put in the code helps the IDE provide better feedback.

Problem is that for new problems, the type system can get in the way.

Erik makes a point that for VB, they implemented a pluggable type system that allows intellisense on an XML document based on a schema or something along those lines. This is interesting because the data you are dealing with extends the type system of the language.

Mads argues that allowing everyone to extend the language with type systems would end up with a babelification of systems. I disagree with this; it is the standard argument that people have against adding any new feature to a language, in that it "could" make the language harder to use. The same argument is used to argue against Lisp-style macros and lots of other really really smart things. It is always hypothetical bullshit; I have never seen a single paper that backs this argument up.

All the good stuff in computer science was done by about the late 70s.

Capability-based security is based on keys: control who actually has a function pointer to the piece of code that performs the dangerous or protected operation.
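The core of the idea fits in a few lines; this is a hedged illustration in Python (the names and the "dangerous" operation are made up, not anything from the talk). Authority is just holding a reference:

```python
# A minimal sketch of capability-based security: the "key" is simply
# holding a reference to the privileged function. Names are illustrative.

def make_file_deleter(path):
    """Return a capability: whoever holds this closure can act on the file."""
    def delete():
        return f"deleted {path}"   # stand-in for the dangerous operation
    return delete

# Only code that is explicitly handed the reference can invoke it.
delete_logs = make_file_deleter("/var/log/app.log")

def trusted_task(capability):
    return capability()            # holds the "key", so it may act

def untrusted_task():
    pass                           # never receives the reference, so it cannot act

print(trusted_task(delete_logs))   # -> deleted /var/log/app.log
```

There is no access-control check anywhere; the security comes entirely from controlling who gets the reference in the first place.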

Not sure that I feel the rest of the discussion is really that productive. Gilad really dislikes the modern collection of languages.

He makes a point that inertia in languages is a really big deal. It sounds like he is frustrated which I can totally understand if you try to be a visionary in the computer languages field.

Dev tools in a purely dynamic language: the types are useful for refactoring, and a pluggable type system really helps things out a lot. The value of the types is giving a formalism to describe things and describe new things.

The VB xml implementation is completely latebound and you still get intellisense.

Thus intellisense and type systems are completely orthogonal. A problem with Smalltalk was that it didn't talk to the external world efficiently.

Newspeak runs on Squeak. Potentially Newspeak could be ported to the CLR.

Tuesday, October 28, 2008

Playing guitar

I have decided to pick up the guitar again.

This will be approximately the 4th time I have started playing guitar after not playing for a while. A while being defined as 1 year or more.

In high school, I bought this ridiculously crappy guitar from some guy and played it like crazy. Then I got a really cool Jackson Flying V guitar and played that. I believe I sold it when I was 19 or 20.

Next I didn't play for a long time. Then I moved in with people who had guitars and started playing again. Acoustic mostly but with some electrical influence.

Then I bought a really cheap guitar that had great pickups. I played it somewhat half-heartedly for a while, then gave it away to a girl who wanted to start performing.

Finally now I bought an acoustic guitar. This would be the nicest guitar I have owned in a while; it is a solid guitar for around $500. The next quality jump comes at around $1000. My theory is that if I can't earn that much money playing guitar then I am stuck with the guitar that I have.

The biggest difference this time is that I am not focusing on becoming the best guitar player I can be. I am focusing on performance and playing for people. Somehow I believe this is the next step for me.

So I have a buddy who has already taken music pretty darn far. He has played a bunch, sings really well, and has performed at coffee shops all around Washington DC. We have decided to start a group that will play at coffee shops all around Boulder. If we find a drummer then perhaps we will actually be able to do gigs.

I have decided to take up singing, so I will probably do backup vocals and lead guitar. He will do the complement.

Now all I really have to do is learn to sing. I can barely sing right now; enough to get by but nowhere near where I want to be.

In any case, the point isn't to get all technical and precise with the music. The point for me is to meet people and make money. I don't need the money, mind you, but I look at it as vindication that we are doing the right thing. I guess the point is that I am not doing this just for myself; I have never really tried to perform anything and I would like to understand how that works. I figure if people really like what you are doing then they should have no problem paying for it.

Anyway, it is also kind of cool to be playing guitar again. It feels like I just can't really stop doing that. No matter how far I walk away from it I always come back to it. I guess I just really really love the way a guitar sounds.

Chris

Sunday, October 26, 2008

New Project

We got to version 1.0 of the product I was working on, and I am really happy with where it turned out. We stabilized the product quite well and it had a really solid feature set. Now we hand it to the marketing and sales experts and see if they can make it fly in the market we built it for. If it fails, it isn't going to be engineering related, that is for sure.

Now I will lead a small, very good team of 5 people, including me, to add interesting features to an established piece of software.

This software is known to be unstable and quite complicated. Our job will be to do whatever is necessary to add hard, intricate features to the software in a way that will not break it.

This is perhaps the hardest thing you can be asked to do in computer science. It is harder (although a lot less work) than writing your own software from scratch. You have to walk into extremely unstable territory and leave the system cleaner than it was when you first got into the problem.

Basically it is going to be a learning experience all around. I am not concerned about technical difficulties of doing what we want to do. I will use this project to test run how I want a development team to work in the long run. We fortunately have a valid problem so I feel that if we can get this working well then the development team ideas I have will be somewhat validated.

Given the most ideal situation, how would you like to run a dev team? How do you want it to feel? How would you schedule work?

I like people all focusing on similar things. When we try to solve problems I want everyone, all 5 of us checking things out and working as a real team. I guess another level is managing several teams, but I am not there yet. Managing one team well is something I have never done, but I have gotten much much closer.

Because the problems we are going to solve are very difficult, I want everyone working on each large feature (called a story) at once. We are going to all analyze the problem, come up with a solution, do groundwork to get a feel for the size of the changes, and finally think of a testing or regression strategy so we know we haven't broken anything with the newer system.

Then we will divide work and move forward. But everyone will be involved in each stage of work.

1. Come up with a possible solution.
2. Check feasibility and the amount of code that has to change. Perhaps move back to 1.
3. Consider what functionality will be refactored/re-written and what regression testing strategy to use to ensure we don't destabilize the product.
4. Figure out testing strategy for new code.
5. Divide up work and split into pairs or singles depending on problem.

I guess I just really want everyone on the same page. I want people to share and understand whatever overarching vision we come up with and also share in the considerations necessary to keep the product working.

More than anything else, I don't want people to work in their small corner. I have seen that work and I know that a few members of this team would happily do that but I just don't feel that model of development is how good software is done. I think it takes really open design process and lots of shared knowledge.

Anyway, what I really really want is everyone fully engaged and working with each other. I don't care what system is used, good development takes a serious team effort. I believe that is really what is lacking from the current processes that I look at; not enough good team work at the architectural and very abstract level.

Chris

Saturday, October 18, 2008

Thinking about programming

I am going to have a normal pace of life and a normal work life coming up soon. Thus I have started thinking about what the next phase of my project is.

The project is to build an environment to allow the creation of the most beautiful digital art imaginable through the use of animation, 3d, and 2d composition. The gist is to build an environment where it is easy to build scenes, animations, create really interesting effects and perhaps presentations.

This is a real-time environment, so it will not be built for offline processing. This means that the results are fit to be used as screen savers, in games, or whatever. But it will not be able to do Pixar level computer graphics.

It is to be simple enough that anyone (any digital artist familiar with photoshop) can use it but allow extreme customization and sophisticated manipulations of the scene, animations, and anything else both from the user and programmatically.

This is not a content creation tool. I am not building photoshop nor Maya but I am looking for a composition environment much closer to After Effects or a game engine.

This is also going to be a mental device to help me really figure out what the next stage of graphics is going to look like from an authoring environment point of view.

Something interesting I look forward to is building a computer with both a Larrabee style compute card and a high end NVIDIA card, and building a rendering environment that will be able to efficiently and interestingly use both GPUs together. Perhaps I would program most of the rendering to go on the NVIDIA GPU and just use the Larrabee component for extremely sophisticated blending effects.

I believe that user creation of effects, mainly fullscreen effects but hopefully also new animation methods, is extremely important. You can do such amazingly cool things when you can program multi-pass shader based effects that I believe making it easy to add new ones is really the best thing you could do.

I think having powerful compositional abilities on several levels as well as extreme application extensibility is key. I want it to be able to combine several 3d assets together into a model, and I believe it is important to be able to compose multiple models together into a new, more sophisticated model. I think animations should be equally composable, although the rules for composing animations are quite complex (like transitioning from a walk to a run in a 3d animated character). Collections of animations should be composable as well.

Composition and extension, as well as application programmability, are key to the success of the system. You need to be able to compose effects and compose models. You need to be able to combine and compose animations. Given a time context and an animation graph, I believe you should be able to embed one animation graph in another as well as run with suitable transitions from one time context to another.

Combining and composing pieces of content are key to success. The user should easily be able to create "symbols", or name their specific compositions, and should also be able to compose and group symbols.

In any case, that is the overall vision. First, I need to get some simple opengl and application datastructures working. I am not going to use c++ for the entire application, but I want the view to be in c++. This gives me easy access to a lot of application libraries and also to gtk, or anything else I want to check out.

Then there will be some interface into a higher level language. I was thinking LISP for a long time but I think I will use Haskell instead. LISP is a very, very beautiful language and perhaps the best dynamically typed language ever invented. It represents the epicenter of programming language beauty due to its extremely programmable compiler and extremely regular syntax.

Haskell, however, represents something else. It is what people are actively researching. A lot of the most interesting research papers are written in Haskell, and I believe that currently it represents the forefront of the absolute best idea a lot of smart people have about what a programming language should be.

In short, I think that LISP is amazing and beautiful in its simplicity. I believe that Haskell represents the immediate future of a very interesting view of programming and is the best language to learn moving forward if you really want to get down to the bottom of this whole language thing.

I have, over the course of 2 years, researched different aspects of the project. I first researched XML and XML Schema very heavily because I thought I wanted the application to be built around a schema graph based on actual XML Schema (with some natural extensions). I believe now, however, that the core of the system needs to be built around a custom schema graph with a custom data graph representing instantiations of objects in the schema graph.

It is interesting because a schema represents a graph generator. The data graph represents one possible instantiated graph. I believe that building the core of the data model around this design will allow for the level of extensibility I require for a good 3d data model.

If I separate the process of getting values from the graph then I can put, in essence, caching layers or animating and processing graph layers in between the data graph and a client of the data graph (like a depiction of a 3d scene).
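As a rough illustration of that layering idea (all class and property names here are hypothetical; this is a sketch of the shape, not the actual design):

```python
# A hypothetical sketch of the layering: a data graph holds raw property
# values, and an interposed layer (cache, animation, processing) can
# transform values before a client such as a 3d scene view sees them.

class DataGraph:
    """Holds raw property values keyed by name."""
    def __init__(self):
        self.values = {}
    def get(self, key):
        return self.values[key]
    def set(self, key, value):
        self.values[key] = value

class ScalingLayer:
    """Stands in for an animating/processing layer between graph and client."""
    def __init__(self, source, factor):
        self.source, self.factor = source, factor
    def get(self, key):
        return self.source.get(key) * self.factor

graph = DataGraph()
graph.set("light.intensity", 0.5)
view = ScalingLayer(graph, 2.0)        # the client reads through the layer
print(view.get("light.intensity"))     # -> 1.0
```

Because the layer exposes the same `get` interface as the graph itself, layers can be stacked arbitrarily between the data graph and any client.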

Implementing this graph idea in c++ is relatively simple and straightforward. Implementing it in F# was pretty easy once I had a clear vision of what the schema graph, data graph, and whole schema graph database system should look like. Now I want to implement this in Haskell and see what comes out of it.

I also need to design a protocol for keeping two data graph databases in synchronization. I would like the haskell code to be able to make arbitrary changes in the data graph and the schema graph and have the c++ implementation mirror those changes and vice versa.

This will involve a system for recording the changes being made to the database and for transferring this information down to binary and back. It will need to be a binary standard that I can implement simply in both c++ and in haskell. If I change a light property from red to blue I want to be able to send these changes and have the mirrored graph also have a light that is blue.
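The record-and-replay shape of that idea might look something like this in miniature; a real implementation would serialize the change records into the binary wire format, and the names here are illustrative:

```python
# Sketch of the mirroring idea: record each change as a small record,
# "transmit" the log, and replay the records against a second graph.

def apply_change(graph, change):
    """Replay one change record against a (mirrored) graph."""
    op, key, value = change
    if op == "set":
        graph[key] = value

source, mirror, log = {}, {}, []

def set_prop(key, value):
    source[key] = value
    log.append(("set", key, value))    # record the change for transmission

set_prop("light.color", "red")
set_prop("light.color", "blue")        # the red-to-blue example from the text

for change in log:                     # "send" the log and replay it remotely
    apply_change(mirror, change)

print(mirror)   # -> {'light.color': 'blue'}
```

Events like mouse clicks would travel as records in the same stream but would not be applied to the graph.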

I will use this to communicate the windowing datastructure changes from language X to c++ and back again. Note that this will involve events as well as data updates, as a mouse click is an event and will not be represented in the graph.

In any case, I know what the data model looks like and I know various other pieces. The next step is to revisit my previous work on transferring a schema and objects over a wire, which I did using c++ and lisp. I will probably start working on the same project again as I have such a clear vision of what it needs to be. Licada is active again.

Wednesday, September 24, 2008

Pain train is coming to town.

Holy cow I am very tired. In addition, the US capital markets seem to be going down quick and there isn't any real hope they will get better any time soon.

So, the quick summary is that for a long time housing prices had been rising. During this period some very questionable loans were made because *if* the house prices kept rising, then even if the homeowners couldn't pay the loan off, the bank was left with a house that was worth more than the loan. Thus from the bank's point of view it was a win-win: huge adjustable rate loans matched to properties that couldn't drop in price.

Well the reality of the situation is that properties definitely can and do drop in price. There is really no such thing as a safe real estate investment.

From the homeowner's point of view, when the property values rise this is awesome. Put down 10 or 20 grand and you leverage at least 2 or 3 hundred thousand dollars. If it rises, let's say 5%, you have just about doubled your investment. This is called leverage and it really works both ways. Should it fall 5% you can easily lose everything you put in, and more.
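A quick back-of-the-envelope version of that leverage arithmetic, with made-up round numbers:

```python
# Rough numbers behind the leverage claim: a small down payment controls
# a large asset, so small price moves produce large returns on the cash
# actually invested. The figures are illustrative, not from any real loan.

down_payment = 10_000
house_price = 200_000      # controlled with only the down payment in cash

for move in (0.05, -0.05):
    gain = house_price * move          # change in the asset's value
    roi = gain / down_payment          # return on the cash actually invested
    print(f"{move:+.0%} house move -> {roi:+.0%} on the down payment")
# +5% house move -> +100% on the down payment (roughly "doubled")
# -5% house move -> -100% on the down payment (the whole stake is gone)
```

The 20:1 ratio between the asset and the cash at risk is what amplifies a 5% move into a 100% gain or loss.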

A lot of people bought houses they could barely afford in order to leverage the most amount of money they could. By barely I mean that for all intents and purposes they could not afford these houses.

Now let's say you can't really make the payments on the loan. Then you sell in the best case or default/foreclose in the worst case. If the property value has appreciated, then the bank wins whether you sell or whether you foreclose. If the property values drop, then the bank is only better off if you sell.

Let's say you have a lot of these properties. You are the bank. Now a ton of them have dropped substantially in value *and* your homeowners (an odd term when there is a mortgage involved) can't pay: you are screwed. Thus you lost a lot of cash.

In any case, this is probably a gross simplification but a huge amount of capital seems tied up in bad mortgages on bad properties.

The effect of banks going out of business is that everyone from you to your grandma to the farmer down the road cannot get capital easily. This will hit businesses incredibly hard because it is difficult to buy stock without capital. Thus the easier it is to get capital, the easier it is to buy stock. This is also important because farmers need to buy seeds and, quite frankly for the US economy, people need to buy houses.

--digression--

A lot of stinky Europeans are talking about how much the US deserves this because we have an extremely unregulated financial market. I think this is a load of bullshit, personally. Europeans tend to be the most fiscally risk-averse people you have ever met. They like to have a single job for 1000 years and throw a fit when they might actually have to change that job. Of course our financial system is more unregulated but in the long run I think that it is probably more effective because of this. We think of all sorts of interesting ways to make money that you just can't do in Europe because the risks are too great. Every now and then there is going to be a serious correction; but at least we *can* buy a house. At least we *have* access to good fair credit.

So if you listen to a lot of BBC and various other outlets you will continually hear the reporters ask if this will lead to more regulation. I sincerely hope it will not; that regulation will make capital for small businesses harder to come by and make owning a home a much more distant proposition for millions of Americans.

--end digression--

In any case, we are going into a serious correction. This will most certainly end in a recession but it should not end with a depression. A lot of smaller businesses are going to have a hard time because they can't invest in stock required for their operation as easily. A lot of people are not going to have houses or are going to lose their houses and that is the way it is.

So let's talk about the government's buyout plan. First, let me explain my biases because I am sure they have colored my analysis of the situation.

When I hear the Bush administration talking about $700,000,000,000 of money going to someone, I assume it will go to the top 1% of Americans, and it looks like initially this is the case. I haven't made it into this percentage group yet so of course I think this is utter bullshit.

They want to buy the bad assets from the banks. This will let the banks off the hook and leave the taxpayer holding onto approximately $700,000,000,000 of foreclosing mortgages and poor property values. Should the taxpayer hold on to this long enough they will surely get this money back. The main question is how long is long enough and what is the real return on investment.

Notice that I didn't say they will buy the high-interest-rate ARMs from the home owners. The home owners get no protection; the banks and financial institutions get to continue business as usual. Most likely the managers of said banks will continue to get their awesome bonuses just as they would have anyway, and a lot of them will talk about how hard the crisis hit them to ensure the proletariat doesn't get too upset.

They needn't worry about it, however, as the proles will never revolt.

The Democrats in congress are attempting to provide the other side of the equation. They want some of this money to go to homeowners, most likely to refinance the sub-prime mortgages into ones with a little more lenient interest rates.

Now, let me quickly say what is really going to happen. Should congress authorize this gigantic bill, who controls all of this money over the long run? Somehow money will start flowing into some account, and some group of legislators will be put in charge. A bit of it will go towards the advertised uses, whether the banks or the homeowners. The rest will be buying the most cocaine, hookers, and bridges to nowhere you have ever seen in your life. Those senators will suddenly have amazing reelection campaigns because whoever stands to get even a little bit of that money will fucking hook a brotha up.


Which is why, at the end of the day, this will just suck for a while and who knows what will happen. Everyone will say how they supported whatever solution the legislative branch comes up with and talk about how they were really working for the American people. These guys are assholes. First rate assholes.

Chris

Saturday, September 20, 2008

Blogging is hard

Writing blogs, at least for me, is one of the harder things I have ever tried to do.

I guess I imagine a reader who is as critical as I am of things, perhaps everything.

Depending on how cocky or conceited you think I am, you may consider me completely ignorant of how I sound to other people. Perhaps a few of you think that I have no idea my effect on other people or that I have no knowledge of how different I seem to other people.

I know that I come across as perhaps over-emotional and very sophomoric. I also realize that I have had a really difficult time communicating the deeper things that I think about. Like everyone I have ever met, I of course feel I have very intelligent, important things to talk about. Just like everyone else, however, I also know that I am an extremely poor judge of what will be considered intelligent, interesting or anything else to other people.

In retrospect, I don't find the really interesting bits of things I come up with in the blog. One of the things I really enjoy doing is shocking people with a deep or clever insight. I love the look in someone's eyes when you say something that they really consider to be abstract or interesting. It seems that my ability to do this is based, at least partly, on social circumstances. I cannot do it in writing; I have to feel the flow of the conversation to really do it well.

I also feel, however, that it is at least partly rude or perhaps even violent to do it at will. Because I love to do it so much I really never considered the fact that it might upset someone or make someone feel uncomfortable. In a way it kind of rips control away from the person you do it to. A lot of times it may even be just showing off; perhaps a super sophisticated way to bully someone.

I guess the sad part of it also is that I appreciate sophistication and extremely subtle communication. Unfortunately the type of people who I would usually do it to may be exactly the type of people who would be most upset or uncomfortable with someone coming in and doing it to them.

Anyway, this is hard for me. I walk a line between trying to be unemotional enough to be intellectually stimulating while trying to also have the courage to express things that are hard for me; the expression of such does leave me a little more open than I am otherwise comfortable with.

Chris

Saturday, September 6, 2008

Amante thoughts

Back at the coffee shop, getting ready to go out tonight and having some tea and a gin n tonic.

This post is going to be quite idle, so tune out now if you have better things to do.

It occurred to me that what I really like about well-written software is its composability. The more composable a piece of software is the more options it gives people to do something cool with it.

I spoke with my father about building tools for the pathology lab and he had a superb insight. Please place this into context; my father and I are a lot alike but technical ability we don't share. He has the social ability that I lack and I have a higher order mathematical ability.

Anyway, he said that you want to make every tool as general as possible; this is what makes it useful. I didn't expect to hear that from someone who is not an engineer, but it is one of the golden truths of computer science. Most likely it is the golden rule of anyone who builds and uses new tools in different ways; it then occurred to me that the diagnosis he does probably involves quite a bit of problem solving. For some reason this never occurred to me before.

This is the one primary advantage of functional programming if you are talking about the microscopic version of composability. It is also one primary advantage of open source software when we are talking about software development in the large.

I think we actually took quite a large step backwards in terms of software composability and reuse when we started compiling everything down to binary. For some reason it seems that C/C++ based systems are inherently tougher to compose. Perhaps because malloc is a global, perhaps 1000 other reasons.

--context change--

It seems there is a lot of contention around garbage-collected vs non-garbage-collected code. As far as monads and monadic forms are concerned, couldn't you consider the memory allocation system to be a monad? Isn't creating a new object implicitly changing the state of the system in one way or another? Thus shouldn't every function that allocates new data use a monad passed in?

Granted this would be tedious but it would also allow you to use different memory management systems with different pieces of code.
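Python has no monads, but the shape of the idea, threading the allocator through every allocating function the way a State monad threads state, can be sketched with explicit state-passing. Everything here (the toy allocator, the names) is made up for illustration:

```python
# Sketch of the "allocator as monad" idea via explicit state-passing:
# every allocating function takes an allocator and hands it back alongside
# its result, so different memory management systems can be swapped in.

class CountingAllocator:
    """A toy allocator that just tracks how many objects it handed out."""
    def __init__(self):
        self.allocated = 0
    def alloc(self, value):
        self.allocated += 1
        return [value]                 # stand-in for a heap cell

def make_pair(alloc, a, b):
    cell_a = alloc.alloc(a)            # the allocator is threaded explicitly,
    cell_b = alloc.alloc(b)            # the way state threads through a State monad
    return alloc, (cell_a, cell_b)     # ...and returned alongside the result

alloc = CountingAllocator()
alloc, pair = make_pair(alloc, 1, 2)
print(alloc.allocated)                 # -> 2
```

This is exactly the tedium the paragraph above concedes; the payoff is that `make_pair` works with any allocator that exposes `alloc`, which is the "different memory management systems with different pieces of code" property.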

Chris

Friday, August 22, 2008

A Change in Programming Style

I am sitting in Amante coffee shop in Boulder, just kind of doing a bit of code and waiting for the night to start. A salsa band is playing soon nearby and I need to burn some time.

A jazz band just started to play in the shop and they are quite melodic and smooth.

It is almost worth it to just sit here and chill, thinking about random thoughts about life and lambdas.

A while back I started to get a few symptoms of RSI. Mainly I was getting tendinitis in my fingers. It *still* isn't gone but it is a lot better than it used to be.

But that experience really prompted me to think about what I could possibly accomplish at the end of the day. It occurred to me that I really did a bit more work than I needed to. I would spend a lot of time typing something, think of a better way, and then spend a lot of time typing it again.

Needless to say, on large projects this was just not going to get me anywhere. I would also get frustrated with how mundane some of the code I was writing was and just writing absolutely as fast as I could. This was somewhat effective, but I also would not necessarily cover all of the bases when I did this.

Anyway, I still work pretty fast at times. This week is an intense week, I am working multi-pass effects into a somewhat sophisticated effect authoring system and really getting into it. They will be very powerful and quite beautiful when they are running, but what I can get done under the time allowed is only so much.

A large change has happened in how I work on my own home projects. Rarely do I spend more than about 10-20 minutes typing. A lot of time I spend just looking at the code and trying to figure out some way to do what I want to do with the least amount of typing.

I spend a lot more time researching different ideas and trying to figure out what language theorists are up to. I spend a lot of time just trying to visualize how I would like whatever I write to look like when I am done.

In any case, I don't type nearly as much. I also don't type nearly as quickly.

Now for an interesting although really obvious thought. The longest lived systems tend to be the most programmable. It is one thing to design your application for plugins and such but that is a very...boxed sort of programmability. Shove scheme or javascript into the application and enable a sort of live editing and updating. Excel does it pretty well! So do a handful of other applications.

But the point is, the more programmable you build the application the better off you are. Don't have a fixed data model, avoid anything fixed if it is an application of any significant functionality. Live coding is where it is at, why code any other way? I love F# and haskell; those work well for foundations. The top layer needs to be typeless and crazy dynamic.

Chris

Thursday, July 31, 2008

Time passes

Lots of applications incorporate animation. In fact, I would say that animation and interactivity are the two hardest things to add well to any application. I don't intend to address interactivity as I really don't know that much about it. So then we are left with time.

What exactly is animation? By animation, I generally mean interpolation; not frame-by-frame. I mean that you have something like two values, and an equation that takes you from value one to value two based off a third, independent variable.

This is a generally very useful and beautiful idea. Something really cool that Maya lets you do is add a property that controls a set of other properties based off some interpolation of the original property. Let's say you have an animated face. You make it look like it isn't smiling and take a snapshot of your data (this snapshot is usually called a keyframe). Now you make it look like it is smiling and take another snapshot. Figure out the differences between the two of them and add a property where when the property is 0 the face isn't smiling and when it is 1 the face is smiling.

Crazy and simple; but the results of allowing the users to do this can *greatly* simplify a lot of tasks. Because the next thing an artist will do is add a set of properties that describe a set of facial expressions and then try to play with all of them and see what happens. They then get completely bizarre output that is often delightful and have expressions on characters that no one really understands, either what the expression means *or* what is mathematically going on underneath the covers.
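In miniature, the Maya-style driver property described above is just linear interpolation between two snapshots. The property names and values here are invented for illustration:

```python
# A minimal sketch of a driver property: two snapshots of face data and
# one control value that interpolates between them.

def lerp(a, b, t):
    return a + (b - a) * t

neutral = {"mouth_corner_y": 0.0, "brow_y": 0.0}   # snapshot 1 (keyframe)
smiling = {"mouth_corner_y": 1.0, "brow_y": 0.3}   # snapshot 2 (keyframe)

def apply_smile(amount):
    """amount=0 -> not smiling, amount=1 -> smiling, in between interpolates."""
    return {k: lerp(neutral[k], smiling[k], amount) for k in neutral}

print(apply_smile(0.5))   # halfway between the two expressions
```

Stack several such driver properties over the same face data and you get the bizarre, delightful combinations the artists discover.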

So there we have something *animating* due to user input. Now let's say you have a clock that continually increments based on time. You patch this value into the facial animation engine and all of a sudden the person may look like they are laughing (assuming the value is modded by its range). But that brings up another topic; what sort of transformations can you do on the input stream to get interesting behavior?

What if you take twice the range of the input and mod the clock by twice the range? If your time is in the upper half of the range then you run it backwards; if it is in the lower half you run it forwards. This would be called ping-pong, and would make the animation bounce between the interpolations like someone dancing or doing facial exercises.
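The ping-pong transform is a few lines of arithmetic; here is one way it could look (a sketch, not any particular engine's implementation):

```python
# Ping-pong: fold a clock over twice the animation's range so playback
# runs forward, then backward, then forward again.

def ping_pong(t, length):
    t = t % (2 * length)                          # wrap over twice the range
    return t if t < length else 2 * length - t    # upper half runs backward

# a clock ticking 0..7 over an animation of length 4:
print([ping_pong(t, 4) for t in range(8)])   # -> [0, 1, 2, 3, 4, 3, 2, 1]
```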

What if you multiply the input by a number? You can see that how you manage this input stream *also* gives you a range of creative and interesting effects.

So you have some function that takes input that ranges from 0-1 and produces output based on keyframes. You get all sorts of interesting properties by controlling this input. Let's say you use a bezier function to control this input range. Then you get bezier animation; except it is normalized, so you can take the same bezier curve and apply it to several inputs. You can merge input streams by using a combination operator like divide or add (or subtract). You can do any number of crazy input nonsense and really produce some interesting stuff.
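Treating these input transforms as composable functions on the normalized 0..1 range might look like this; `smoothstep` stands in for a bezier-style easing curve, and all the names here are illustrative:

```python
# Input transforms as composable functions: each takes a 0..1 value and
# yields another 0..1 value, so they can be scaled, merged, and stacked.

def smoothstep(t):
    return t * t * (3 - 2 * t)      # eases in and out over 0..1

def scaled(f, factor):
    """Multiply an input stream's value, clamped back into 0..1."""
    return lambda t: min(1.0, max(0.0, f(t) * factor))

def averaged(f, g):
    """Merge two input streams with a combination operator (here: average)."""
    return lambda t: (f(t) + g(t)) / 2

identity = lambda t: t
curve = averaged(smoothstep, scaled(identity, 0.5))
print(round(curve(0.5), 3))   # -> 0.375
```

Because every transform has the same signature, the same eased curve can drive any number of keyframed outputs, and merging streams is just another function in the chain.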

You can also setup processing graphs of these inputs. This will mimic behavior that makes a set of animations run in the *time context* of other animations.

So now let's get back to applications. Lots of applications allow animation. But none (or very few) of them allow you to set up arbitrary processing graphs to experiment with arbitrarily complex and clever animation systems. Breaking animation down into its components really allows you to do some interesting things.

For instance, what if the beginning value *isn't* a keyframe? What if it is based on something else; like an object's position or something like that. Then the animating object will animate from one object to a point in space. We called these dynamic keyframes; they are cool; I swear it. They allow you to mix interactivity with animation; without them you run into a lot of situations where you just can't get the object to move around reasonably.

We have an acyclic directed graph of floating point processing routines (at least; presumably other information could flow down this graph along with the floating point values). We have an object that generates a consistent increasing signal, and we have things that will *reset* that signal when told, so it appears to start from zero. We have sets of functions that, given a floating point value, can produce another floating point value. By combining these functions in clever ways we produce sophisticated and somewhat non-obvious behavior such as character animation. But the point is that I think it would be cool to allow very open manipulation of this processing graph.
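A toy version of such a processing graph, pull-based evaluation over a DAG of float functions, might look like this (purely a sketch, with made-up node functions):

```python
# A tiny processing graph: nodes are functions of their input nodes'
# values, wired into a DAG; evaluating the output node pulls the current
# value through the whole graph.

class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
    def value(self, t):
        if not self.inputs:
            return self.fn(t)          # source node: samples the clock directly
        return self.fn(*(n.value(t) for n in self.inputs))

clock   = Node(lambda t: t)                # consistent increasing signal
speed   = Node(lambda v: v * 2.0, clock)   # multiply the stream
wrapped = Node(lambda v: v % 1.0, speed)   # keep it in the 0..1 range

print(wrapped.value(0.75))   # -> 0.5
```

Rewiring which nodes feed which is exactly the "open manipulation" in question: inserting a ping-pong node or a resettable clock node changes the behavior without touching anything downstream.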

Chris