
Friday, December 25, 2015

Holiday tidings and Course Readjustment Day 25

Merry Christmas, Merry Shalom, Happy New Year.

Relevant GitHub code repositories:

SICP_1.32
SICP_1.33-1.34
SICP_1.40
SICP_1.41-1.46
SICP_2.01
SICP_2.02
ASJava_3.01-3.09
DemocratKarel.java

First, the work: I've done every exercise in the first chapter of SICP and a few in the second; I'm finishing the last exercises in the 4th chapter of The Art and Science of Java; I finished lecture 2B of SICP; and I'm up to lecture 8 of CS61A Spring 2011 and lecture 7 of CS106A. Today is Christmas, Day 25. GitHub repositories will be posted. OK.

I am not going to do every single exercise in SICP. That is crazy. I spent 4 hours yesterday trying to build a midpoint calculator for line segments out of constructors and selectors. Half of you reading will think that's baby stuff, the other half black magic. I have to remember I have a timeline, and I'm not doing anyone any favors by spending 95% of my energy trying to squeeze out the last 5% of understanding of CS theory - forget the 80-20 rule.

Two things really made me think it's time to shift gears. First, I finally found the homework and problem sets for the Berkeley CS61A course: [here]. In fact it's a guy's GitHub where he laid everything out. Thank you very much, kind sir. Looking through the assignments I noticed... UC Berkeley's famous EECS department is not making its students do every exercise in the book (maybe because they're not crazy). In fact, it's just a handful from each chapter. Well, this was a welcome shock to me. The second thing is I took a look at the posting dates on that BilltheLizard guy's blog. I've been using his posts to help get me through when I'm stuck. His first post is from 2009. His last post, which is only 2/3 of the way through the 2nd chapter, is from 2013. I'm a fucking idiot. But there are some things it's good to be wrong on, and answering diminishing returns with stubbornness is one of them.

So with that said, my work in the SICP book is going to be limited to the Berkeley homeworks + whatever I feel like doing. This should free up enough time for me to work on Electronic Circuits, Calculus, and maybe cure cancer.

On another note I had to get creative for a CS106A assignment. I'm not at Stanford so I don't have the little 'world' they want you to write the program for. So I went to YouTube, found a guy who explained how to use the editor, then I made a world that matched what was in the handout, did the programming assignment and it worked. Yay. Here's the handout: https://see.stanford.edu/materials/icspmcs106a/10-section-handout-1.pdf


I also cleared up the whole mess with Seton Hall, and now my transcripts are ordered for Stanford. It's a good thing they're lenient with extra materials, and it's completely understandable that, while they'll review an application without official transcripts, they will only make a final decision once they've got them. Anyway, even though the timing is off with the holiday season, they'll get there and worry on that front is over.

I decided on a final project for myself: a way of proving to myself that I've learned anything and, more so, that I can actually do things. I want to do things with aerospace and technology... so... okay, simple. Build a drone. If I can put together pieces of metal and silicon and literally breathe life into the damn thing, then I don't need any more proof of what can be done.

The drone project also grounds my CS study, and gives a very real and immediate purpose to studying EE. It actually ties in a lot of things, but anyway.

So I'll be ramping up towards it. It's going to be some sort of Arduino-controlled quadcopter. It's going to be fun putting together the mini-projects beforehand, where I teach myself the little skills I'll need to tie together later. Hey look, a crash course in Systems Engineering. So that's that.

I also haphazardly added another course to the work I'm doing. Since I cut out Codecademy as my lab (it's really on such a simple level that if I do it, I shouldn't really bother to count it) I have an extra slot. MIT OCW has this cool Intro to EECS course. I won't have access to the hands-on labs they'll do (but to be fair, I'm more than making up for that later so no complaints), but MIT put all their reading assignments online. I'm watching the lectures and doing the readings now.

I find it important because they emphasize getting different systems to work together, and understanding a system both from its parts and as a whole. That was really the catch for me. That's about it for now. Back to work, and happy holidays.

Friday, December 18, 2015

Update: Day 18

Relevant Code Repositories (my GitHub):
SICP_1.14-1.15
SICP_1.16
SICP_1.17
SICP_1.18
SICP_1.19
SICP_1.20
SICP_1.21-1.22
SICP_1.23-1.28
SICP_1.29 (+ work notes)
SICP_1.30
SICP_1.31

CS106A:

Assignment 1: Karel:
CollectNewspaperKarel.java
StoneMasonKarel.java
CheckerboardKarel.java
MidpointFindingKarel.java

Art & Sci of Java exercises:
ASJava_2.01-2.02
ASJava_2.04


I got into a bit of a working groove and kept putting off a new blog post, so now I've got quite a bit to put down here. Before we go in, a note: I need to start making small notes as I write. I often realize something - the feel of creating an algorithm, or how the work seems to be getting easier, or any one of a number of things - but those are the types of thoughts that flash for a few minutes before getting absorbed into your subconscious; small steps on the path of development, but very hard to identify explicitly until long after. Noted. So anyway..

What I've done in the last 7 days (I'm writing this in the morning of the 18th, so I haven't started working today yet).

SICP, Structure & Interpretation of Computer Programs:

Split up between work in two classes of the same name: MIT's 6.001 (original lectures & readings) and CS61A Spring 2011 (Berkeley's class, with extra readings). The exercises are feeling a lot easier than they did at first, and SICP is also starting to take less of my time than it used to. (I'm noticing a feedback loop...)

Exercise 1.14 asks you to sketch the tree generated by the recursive count-change procedure. This was the first time I really got an appreciation of recursion. I also had to specify its time and space complexity. Here's my sketch below:


1.15 quickly shattered the feeling of comfort I got from 1.14, and really drove home the difference between Normal and Applicative order evaluation. I'm still not 100% on it, but I turned what was a linear-recursive process into a tree-recursive one because I expanded it wrong on paper.

1.16 involved writing an iterative exponentiation procedure through successive squaring, using the identity: (b^{n/2})^2 = (b^2)^{n/2}
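As an editor's sketch (my own Python translation, not the Scheme version in the repo), the iterative successive-squaring idea looks like this:

```python
def fast_expt(b, n):
    """Iteratively compute b^n in O(log n) steps by successive squaring.

    Invariant: acc * b^n stays equal to the original b^n throughout.
    Uses (b^(n/2))^2 = (b^2)^(n/2) to halve n whenever it is even.
    """
    acc = 1
    while n > 0:
        if n % 2 == 0:
            b, n = b * b, n // 2      # square the base, halve the exponent
        else:
            acc, n = acc * b, n - 1   # peel one factor off into the accumulator
    return acc
```

The invariant is what makes the procedure iterative in SICP's sense: the full state of the computation is always in `acc`, `b`, and `n`.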

1.17 asked to define a multiplication procedure similar to the <<fast-expt>> procedure in the book, that uses a logarithmic number of steps. Think of a logarithmic progression as the inverse of exponential (but in the same direction). Whereas with exponential increase additional inputs are magnified, in a logarithmic increase additional inputs give rise to diminishing returns. This is why in orders of growth, Log(n) < n < n^k. Figuring out how to write the procedure also helped me unpack some important concepts in the book. Not 100%, I still had to use BilltheLizard's blog for help (I now use that blog as a kind of minimum-speed governor: when I'm stuck after a set time of struggling, or I need to check my answers I go there).

1.18 I liked a lot. In this exercise you have to take the русский крестьянский способ, Russian peasant method, of multiplication and turn it into code. The method is iterative, and to me says something about human development: it's an iterative algorithm for multiplying numbers that requires very little thought; like the finger counting games your uncle showed you when you were little. Why and how did this come about? Well this algorithm is just a version of one invented (for all we know) in ancient Egypt. So what's up? Well us humans needed a way to perform multiplication... now how does any sentient animal with no prior knowledge of mathematics do accounting? For all intents and purposes we at that stage were very similar to computers: we only know how to do very simple operations. So the algorithm is a way to accomplish a more complex task using very rudimentary tools to get the job done. Tricks. But useful tricks. And without understanding the nature underlying what you're doing. I think that makes sense of much of human history, and perhaps provides context for understanding the growth of calculation ability in a more universal frame. What. Anyway. If it works for ancient Egyptians and Russian peasants, it'll work for my computer.

I managed to solve this algorithm on my own, only using Wikipedia to see the actual algorithm. That's practice I need: take a concept and an algorithm, and convert them into code. It's a great feeling when it works. The algorithm works like this: take the two numbers being multiplied, X and Y. Halve X and double Y to form a new pair, and do it again until X = 1 (ignore remainders). The product of X and Y is then the sum of the Y values from all the pairs where X is odd (skip the pairs where X is even). Example:

13 * 238:
13    238
6      476
3      952
1    1904  +
=    3094
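Hedged as my own illustration (not the Scheme version in the repo), the same halve-and-double algorithm in Python:

```python
def peasant_multiply(x, y):
    """Russian peasant multiplication: repeatedly halve x and double y,
    summing the y values from the rows where x is odd. It works because
    the odd rows pick out exactly the binary digits of x."""
    total = 0
    while x >= 1:
        if x % 2 == 1:       # odd row: this y contributes to the product
            total += y
        x //= 2              # halve, ignoring the remainder
        y *= 2               # double
    return total
```

Running it on the example above, `peasant_multiply(13, 238)` gives 3094, matching the table.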

Exercise 1.19 was harder and I had to look online to walk myself through the solution. Again, I'm always wary when I do this, but I'm counting on me having the judgement to decide when to let myself struggle and when the point of diminishing returns and inefficient use of time has been reached. This one was a case of using a mathematical proof and some math-jiujitsu to turn a recursive algorithm into an iterative one with a logarithmic number of steps. It involved showing that the Fibonacci sequence could be represented as a transformation function, which itself could be represented as a special case of a more general function \rightarrow which it is your job to prove by showing that the special case can be written in terms of the general case, then turn it into a procedure. Notes in GitHub link above.

Exercise 1.20 stumped me one night; then I worked through it in 2 hours in the main library of Rutgers U. It asks you to write out the entire process taken by the <<gcd>> procedure evaluated in Normal and Applicative order. Applicative was straightforward, Normal not so much. I took a brute-force approach to get through this one: I pulled up a solution online and started walking myself through it line by line until I understood what was going on. Then I started writing out the entire evaluation process, checking my work at the end against the solution. This taught me the usefulness of indenting in Scheme, and some of the logic of Normal-Order evaluation.

Exercises 1.21 and 1.22 were made more challenging because a necessary procedure, <<runtime>>, was not available on the interpreter I use. I tried setting up the SISC interpreter, which would allow the <<current-inexact-milliseconds>> workaround, but even after setting up Java so Firefox wouldn't block it, no luck. I just did my best writing the procedures (timed tests for determining & finding prime numbers), and checked my results against those online.

Skipping through a bit, the rest was just work. I learned about how you can write a series-summation procedure and convert it into a product one; and for both define them either iteratively or recursively. This was further reinforced in Exercise 1.31 where I had to take the Wallis Formula for approximating Pi [See: Wolfram Mathworld, line (3)]. It was cool to see how surprisingly easy it was to take this big complicated formula, and just break it down into code; especially the series terms and how they progress.
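To make the series concrete, here's a rough Python rendering of the product form used in exercise 1.31 (my own sketch, not the Scheme in the repo): pi/4 = (2·4·4·6·6·8···)/(3·3·5·5·7·7···), with each factor pairing the numerators 2k and 2k+2 against (2k+1)².

```python
from math import pi

def wallis_pi(n):
    """Approximate pi via the Wallis-style product from SICP 1.31:
    pi/4 = (2*4*4*6*6*8*...)/(3*3*5*5*7*7*...).
    Factor k contributes (2k)(2k+2) / (2k+1)^2."""
    product = 1.0
    for k in range(1, n + 1):
        product *= (2 * k) * (2 * k + 2) / (2 * k + 1) ** 2
    return 4 * product
```

It converges slowly (the error shrinks roughly like 1/n), which is part of what the exercise is showing.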

The ease of 1.31 was probably made possible by the 5 hours I spent working on 1.29 where I took the Simpson Formula for approximating an integral and turned it into code. My mistake (though a useful mistake) was spending 4 hours working at the problem without looking at the help in the book. I was trying to reinvent the wheel by finding a way to capture the terms in the series and how they progress into sub-procedures; and for syntax I managed to confuse myself with how to define a variable in Scheme. Long story short, I learned a bit more on how to use summation for a series.
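For reference, a compact Python rendering of the Simpson formula from 1.29 (the Scheme version is in the repo linked above; this sketch is mine):

```python
def simpson(f, a, b, n):
    """Simpson's rule as stated in SICP 1.29: integrate f over [a, b]
    with n (even) subintervals of width h = (b - a) / n, summing
    y_0 + 4y_1 + 2y_2 + ... + 4y_{n-1} + y_n, all times h/3."""
    h = (b - a) / n
    def y(k):
        return f(a + k * h)
    total = y(0) + y(n)
    for k in range(1, n):
        total += (4 if k % 2 == 1 else 2) * y(k)
    return total * h / 3
```

Simpson's rule is exact for cubics, so integrating x^3 over [0, 1] should give 1/4 up to rounding.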

CS106A: Programming Methodology

After the pain of SICP, Stanford's intro CS course is like a gentle breeze. I may be further along than I think, because I'm genuinely surprised by just how easy this class is. I'm also building better study/work habits, because a course is a totally different (and much more docile) animal when you do the readings and exercises before the lecture. This is my first time touching Java, and norms from SICP are spilling over. Having finished the Karel reader, I'm into the Art and Science of Java book and doing every exercise. I'm glad I took the time to figure out how to start a new Java project (wow is this language convoluted... too much overhead).

I did the first assignment in the course, programming the little robot Karel to do a few tasks. Here I took what was theoretical in SICP and really put it into practice; namely, breaking problems down into smaller parts. There were 4 problems in the assignment. The first two were easy: pick up a marker on a tile, and write an algorithm to repair columns. The third, an algorithm to create a checkerboard, took me 10 hours. The biggest problem I had with that one was getting disorganized: there were too many branches in the algorithm's logic, my mind couldn't hold them all at once, and it turned into a mess. I finally made it all work though. Because of the work on that problem, the 4th, which was even more challenging, only took me 1 hour. It was simple: design an algorithm, map it out into its sub-procedures, write it. Fucker worked on the first run. That problem was designing and implementing an algorithm to find the midpoint of any line, without being able to count. I liked it a lot; it was using logic to solve a math problem. Cases like these open your eyes to the possibilities of computation and what crazy stuff may lie ahead in the future. I like it.

I made a short video demonstrating a few of the programs, and a snapshot of the code as well. Also included is a snowman from a book exercise.






I'm noticing a kind of feedback-loop in my work. What I learn in SICP - whether it be technical, habitual, or conceptual - comes back and reinforces my work in CS106A. Then what I've put together in CS106A, a much easier and forgiving environment, comes back and strengthens my ability to understand and solve problems in SICP. And this process becomes more powerful and accelerates with each completion of the loop. It's interesting and we'll see where it goes.

Still, it may seem great, but it's also necessary. Maybe another way of looking at things is to be grateful that they're actually working. Computer Science & Programming is a sideshow for me. After I get through this, it's back to Physics and then on to Engineering. By that time I should be either in a school or a company (my own or someone else's), so I'll have to make time.

As a side note, it was interesting to hear Elon Musk at a talk at Stanford Futurefest note that the logic behind CS is that it isn't capital intensive. While it's nice to hear what I've thought for about a year being spoken back to me from the ether, it also makes me wonder if I'm being too slow.

We'll see, eh? That's the update for Day 18, now back to work.

There is no such thing as fast enough.

Thursday, December 10, 2015

SICP 1.12, 1.13: Pascal's Triangle and proving Fibonacci Right

Much to my joy I was able to write a procedure that computes elements of Pascal's triangle, in only a few minutes, directly after spending 5 days trying to understand the exercise before.

It's made easier since it's a recursive call. The way it works is simple. And as a note before I begin, I do remember reading about methods to tackle this problem before, so this isn't something I figured out in a vacuum, but then again I think few things ever are. If you're not familiar with Pascal's triangle, it's a triangle made up of numbers, where each number is the sum of the two numbers above it to the left and right. It starts at 1 and grows basically exponentially.

To start this problem it helps to figure out how to represent the triangle in the computer. We can make life easy by pushing the triangle up against an imaginary left wall so that it'd look like this:

1
11
121
1331
14641
1......

An algorithm for computing values is immediately clear from this. Think in terms of row and column coordinates. The value of any one point is the sum of the point directly above it and the point directly above and to the left. As I was starting to figure at the end of my last post: whenever the row and column coordinates are equal, the value is one. Every first row point, regardless of column, has a value of 1.

Just to recap the pseudo-code-ish-math:

row(0) all = 1
#rows = col# + 1
if col# = row# => 1
if: row# = x, col# = y, & val = f(x, y), then f(x,y) = f(x, y-1) + f(x-1, y-1)

So there are three conditions plus one recursive call in this procedure. The recursive call is what makes the procedure work, and it goes like this:

(+ (pascal row (- col 1)) (pascal (- row 1) (- col 1)))

-- which translates to: f(x, y-1) + f(x-1, y-1)

Now we put that all together to build our procedure:

(define (pascal row col)
    (cond ((= row col) 1)
          ((= row 0) 1)
          ((> row col) #f)
          (else (+ (pascal row (- col 1))
                   (pascal (- row 1) (- col 1))))))

And it works! I use the Scheme interpreter on repl.it to check the output. One last note on this procedure: on my first run I forgot to add a breaking condition for illegal coordinates. There are no values for points outside Pascal's triangle (I'm sure we can say they're zero, but that's not coded in here), so the procedure would call itself, see that none of the conditions are met, call itself again, and repeat infinitely. So to break out of this loop I wrote the condition that if the row coordinate is larger than the column's, return a value of False (#f in Scheme).
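A Python mirror of the same procedure, just to sanity-check values (my own translation; the argument names follow my Scheme version, where `row` is the position within a row and `col` is the row depth, both 0-indexed):

```python
def pascal(row, col):
    """Value at position `row` within triangle row `col`, with the
    triangle pushed up against the left wall."""
    if row > col:
        return False        # outside the triangle: break the recursion
    if row == col or row == 0:
        return 1            # the left edge and the diagonal are all 1s
    return pascal(row, col - 1) + pascal(row - 1, col - 1)
```

For example, row 4 of the triangle is 1 4 6 4 1, and `pascal(2, 4)` returns the middle 6.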

And that's that.


SICP 1.13

This one I thought I wouldn't do, then did anyway. The book asks you to provide a mathematical proof that Fib(n) (the fibonacci procedure) is the closest integer to \phi^n/\sqrt5, where \phi = (1+\sqrt5)/2 (golden ratio).

I went ahead with this b/c I thought it'd be important for me to have a better familiarity with mathematical proofs. So I pulled up a post by BilltheLizard and used it to walk me through the process. He also wrote a quick procedure that provides a bit of empirical evidence, which interested me. I'll get to that in a bit.

So the information we're given is:

\phi = (1+\sqrt5)/2

And we're given the hint that:

\psi = (1 -\sqrt5)/2

Both these have the property that:

\phi^2 = \phi+1   \phi = 1 + \phi^{-1}   1 = \phi-\phi^{-1}
\psi^2 = \psi+1   \psi = 1 + \psi^{-1}  1 = \psi-\psi^{-1}

This will come in handy when the algebra comes in. Now to go ahead and break this full proof into parts, there are 3. First we prove that, to start, Fib(n) indeed equals (\phi^n-\psi^n)/\sqrt5 for the base cases. After that we do what's called the Inductive Step. As I understand it from a Linear Algebra class, mathematical induction is the process of proving something true by showing that: if the thing is true for a base case (say 1), and if its being true for an arbitrary case k implies it is true for the case k + 1, then it is true for all cases.

We'll do this by proving the defining recurrence relation of the Fibonacci sequence; that is, that any point on the Fibonacci sequence is equal to the sum of the 2 points before it. So we must prove that:

Fib(n+1) = Fib(n) + Fib(n-1)

And this relies on the first proof, because we're going to substitute the math formula for Fib().

Finally, to prove that Fib(n) is the closest integer to \phi^n/\sqrt5, we have to prove that the difference between Fib(n) and \phi^n/\sqrt5 is always less than 1/2

So let's get started:

[Step 1]:  Prove Fib(n) = (\phi^n-\psi^n)/\sqrt5

Compare Fib(n)'s output to (\phi^n-\psi^n)/\sqrt5's output:

Fib(0) = 0
Fib(1) = 1
Fib(2) = Fib(1) + Fib(0) = 1 + 0 = 1
-------------------------------------------
For n = 0:

(\phi^n - \psi^n)/\sqrt5 = (\phi^0 - \psi^0)/\sqrt5 = (1 - 1)/\sqrt5 = 0

For n = 1:

(\phi^n - \psi^n)/\sqrt5 = (\phi^1 - \psi^1)/\sqrt5 
= ((1+\sqrt5)/2 - ((1-\sqrt5)/2))/\sqrt5 = (1/2+\sqrt5/2 - 1/2 + \sqrt5/2)/\sqrt5
= ((2\sqrt5)/2)/\sqrt5 = \sqrt5/\sqrt5 = 1

For n = 2:

(\phi^2 - \psi^2)/\sqrt5 = (((1+\sqrt5)/2)^2-((1-\sqrt5)/2)^2)/\sqrt5
= (\phi-\psi)/\sqrt5 = \cdot\cdot\cdot = 1

As an aside, the math checking I had to do so I could substitute in the properties in the last part:

(1+\sqrt5)/2\cdot(1+\sqrt5)/2 = (1/2+\sqrt5/2)\cdot(1/2+\sqrt5/2)
= 1/4+\sqrt5/4+\sqrt5/4+5/4 = 6/4 + \sqrt5/2
= 3/2 + \sqrt5/2 = 1 + (1+\sqrt5)/2 = \phi + 1

Now this is all proven and made redundant by remembering the property from before:

\phi = (1+\sqrt5)/2 \Rightarrow \phi^2 = \phi + 1   oops...Right..

[Step 2]: Assuming Fib(n) = (\phi^n-\psi^n)/\sqrt5, and Fib(n-1) = (\phi^{n-1}-\psi^{n-1})/\sqrt5
Prove: Fib(n+1) = (\phi^{n+1}-\psi^{n+1})/\sqrt5

Fib(n+1) = Fib(n) + Fib(n-1)
= (\phi^n-\psi^n)/\sqrt5 + (\phi^{n-1}-\psi^{n-1})/\sqrt5 = (\phi^n-\psi^n+\phi^{n-1}-\psi^{n-1})/\sqrt5
= ((\phi^n+\phi^{n-1})-(\psi^n+\psi^{n-1}))/\sqrt5
= (\phi^{n+1}(\phi^{-1}+\phi^{-2})-\psi^{n+1}(\psi^{-1}+\psi^{-2}))/\sqrt5
= (\phi^{n+1}\phi^{-1}(1+\phi^{-1})-\psi^{n+1}\psi^{-1}(1+\psi^{-1}))/\sqrt5
= (\phi^{n+1}\phi^{-1}\phi-\psi^{n+1}\psi^{-1}\psi)/\sqrt5
= (\phi^{n+1}-\psi^{n+1})/\sqrt5

Bam.

[Step 3]: Prove Fib(n) is the closest integer to (\phi^n)/\sqrt5

Fib(n) = (\phi^n-\psi^n)/\sqrt5 = \phi^{n}/\sqrt5-\psi^{n}/\sqrt5 \Rightarrow Fib(n)-\phi^{n}/\sqrt5 = -\psi^{n}/\sqrt5

\rightarrow We want to prove that the difference between Fib(n) & \phi^{n}/\sqrt5 is always \lt 1/2

Prove: |\psi^{n}|/\sqrt5 \le 1/2, i.e.: |\psi|^{n} \le \sqrt5/2

\psi = (1-\sqrt5)/2 \approx -0.618....

\rightarrow since n is an integer & n \ge 0, & |\psi| \lt 1

\Rightarrow We know:
|\psi|^n \le 1, for all integers n \ge 0

\rightarrow also: \sqrt5/2 \approx 1.118....

\rightarrow since |\psi|^n \le 1 & \sqrt5/2 \gt 1

\Rightarrow |\psi|^n \le \sqrt5/2

\Rightarrow Fib(n) is the closest integer to \phi^n/\sqrt5

QED
QuantumElectroDynamics


ehh, it looked prettier in the notebook at least



I almost forgot about that procedure..

Now, the way BilltheLizard did a preliminary check of Fib(n) was to write a procedure that outputs values of \phi^{n}/\sqrt5:

(define phi
    (/ (+ 1 (sqrt 5)) 2))
(define (^ base exponent)
    (define (*^ exponent acc)
        (if (= exponent 0)
             acc
             (*^ (- exponent 1) (* acc base))))
    (*^ exponent 1))
(define (f n)
    (/ (^ phi n) (sqrt 5)))

This stumped me for a while but it's actually a very simple procedure. What it does is define exponentiation of the form X^Y.

It does this by multiplying a base by itself a number of times specified by its exponent, using an accumulator (acc) to hold the running product while the exponent counts down. I actually think it's very elegant.

This base exponentiation mechanism essentially works like this:

( *^   exponent   accumulator )
    ( if   ( =   exponent   0 )   accumulator
        else \rightarrow   ( *^   ( - exponent 1 )   ( *   accumulator   base ) ) )

so for 2^3:
   \rightarrow (^ 2 3)
       \rightarrow (*^ 3 1)
           \rightarrow 3 \neq 0 \Rightarrow (*^ (- 3 1) (* 1 2) )
                     = (*^ 2 2 )
                     \rightarrow 2 \neq 0 \Rightarrow (*^ (- 2 1) (* 2 2) )
                               = (*^ 1 4 )
                               \rightarrow 1 \neq 0 \Rightarrow (*^ (- 1 1) (* 4 2) )
                               = (*^ 0 8 )
                                 \rightarrow 0 = 0
\Rightarrow return value: accumulator = 8
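Translating that check into Python (my own sketch, not BilltheLizard's code) confirms numerically that Fib(n) really is the nearest integer to \phi^n/\sqrt5:

```python
from math import sqrt

def fib(n):
    """Iterative Fibonacci: Fib(0) = 0, Fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + sqrt(5)) / 2

# The difference Fib(n) - phi^n/sqrt(5) equals -psi^n/sqrt(5), which the
# proof shows is always smaller than 1/2 in magnitude.
for n in range(20):
    approx = phi ** n / sqrt(5)
    assert abs(fib(n) - approx) < 0.5
    assert round(approx) == fib(n)
```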

And that's all there is to it. I wonder how effective this method of learning is, we'll see. Day 10.

Wednesday, December 9, 2015

SICP 1.11

SICP Exercise 1.11

GitHub code

This is a major early breakthrough for me. What's turning into the core of my study of Informatiks is this book, Structure and Interpretation of Computer Programs. I'm also keeping words from Stack Exchange's CEO, Joel Spolsky, in mind: allow yourself to struggle (for worryingly endless hours) and build a deep understanding (that blog post was a great read too, btw).

Exercise 1.11 in Chapter 1.2 of SICP asks you to take a recursively-defined function and turn it into a Scheme procedure, as a recursive version and iterative version.

f(n) = \{n, if  n \lt 3 \}
f(n) = \{f(n-1) + 2f(n-2) + 3f(n-3), if n \ge 3 \}

Now, in my last post I wrote the recursive version. When I figured it out it felt like an achievement. Then the iterative challenge laughed in my face for 5 days. To recap, here's the recursive version. It's straightforward since the mathematical function is itself defined recursively; it was more like practice coding in Scheme.

(define (recur-f n)
    (if (< n 3)
        n
        (+ (recur-f (- n 1))
           (* 2 (recur-f (- n 2)))
           (* 3 (recur-f (- n 3))))))

I'm calling the function 'recur-f' instead of 'f' to distinguish it from the 'iter-f' version coming up. Up above, you define a procedure, name it, and specify what arguments it'll take as input. After that is an 'if' condition which says: if n is less than 3, output n; otherwise add together recur-f applied to (n - 1), 2 times recur-f applied to (n - 2), and 3 times recur-f applied to (n - 3) - which is exactly the math formula above, expressed in code. If n minus whatever is still not less than 3 in any of the recursive calls, the same process repeats for each such instance, ending only when it is.

Okay. Now, to do this iteratively we need to understand two things - and I think this is part of what took me so long to figure it out. The first is the difference between recursion and iteration as a computer sees it. This finally stuck for me today.

A recursive procedure is going to make successive calls to itself, yes, but the key point is that the current state of the computation cannot be determined by looking at the variables alone; it lives in the chain of deferred operations. It is computation nested within computation; if you cut out a stage the whole thing would break. It's essentially saying, "I'll do this other thing later, and a critical part of that other thing is the result of this thing I'm doing now."

To be honest it makes me think of a talk by Stephen Wolfram at SXSW 2015 where he spoke about the power of using computation as a fundamental tool of scientific research, and its advantages over relying solely on equations. Basically, very simple code can generate very complex behavior that would be hell to express with an equation. Not to go too far off on a tangent, but that also reminds me of a thing I'd say about "thinking 4-dimensionally." Whether that 4th dimension is time or whatever, or if I really meant 3 vs 2, doesn't matter. The point was that a lot of the thinking I see looks only at fixed states: how the world is today, how a company is doing at this instant. I rarely see speech reflecting thought that either looks forward into the future, or in some way captures mutual-feedback mechanisms. I'm doing a terrible job of translating this thought to written English.

Point is: cool! perhaps not everything can be captured by a single snapshot of it on paper, and the world is much more dynamic, and more of a system than a unitary thing. And perhaps the next stage of human thought is.. ah forget it I'll explain more when I have a better handle of what I'm talking about.

So back to Iteration. Its defining feature is that all information about the procedure's state can be found in that stage's state variables. These are explicit variables that tell you the stage of the computation, or what's going on. For example, say we have a procedure that adds two numbers by decreasing the first by 1 and increasing the second by 1, successively, until you're left with 0 and a total:

(+ 3 2)
(+ 2 3)
(+ 1 4)
(+ 0 5)
5

At every stage you know exactly where this operation is going. If we did this recursively (with inc being: increase by 1):

(+ 3 2)
(inc (+ 2 2))
(inc (inc (+ 1 2)))
(inc (inc (inc (+ 0 2))))
(inc (inc (inc 2)))
(inc (inc 3))
(inc 4)
5

If we lose any stage of computation, or if we lost some of those inc calls, we'd destroy the entire operation. But, so far it seems, recursion is easier to define than iteration. To define a procedure iteratively, you have to be able to capture the entire process in your head and write it down explicitly. And that's why the math formula up top gave me such a hard time. How the hell do you turn that into code?

Well once you've spent enough time struggling and panicking, you go over solved examples and work through it enough that you begin to build your own logic. The way I did it was this:

Again the original math function:

f(n) = \{n, if  n \lt 3 \}
f(n) = \{f(n-1) + 2f(n-2) + 3f(n-3), if n \ge 3 \}

First we'll need 4 state variables; for f(n-1), f(n-2), f(n-3), and a counter - to track what stage of the computation we're in. We also need to understand how the variables are changing throughout the procedure. We'll define the first three as A, B, C, and count as our counter.

So what's the overall state of the function's first recursive call?

At f(3) we have: f(3-1) + 2f(3-2) + 3f(3-3), where f(2) = 2, f(1) = 1, and f(0) = 0; this is at n = 3.

Using this as an initial condition (our iterative mechanism wouldn't have been activated otherwise), we have the variables: A = 2, B = 1, C = 0, and count = 3.

How this evolves: at n = 4, A now represents a recursive function call { f(4) }. Therefore the new value of A is (A_old) + 2(B) + 3(C) = A_new. B and C change as well. B_new = A_old, and C_new = B_old. The values shift over one variable, with A being a factory for generating each stage's new value. The counter will decrease by 1 at each iteration. When the counter reaches the (< 3) threshold, the process stops, and the current value of A is returned.

This gives the following procedure:

(define (iter A B C count)
    (if (< count 3)
        A
        (iter (+ A (* 2 B) (* 3 C)) A B (- count 1))))

The above is the internal iteration mechanism of our iterative procedure. We invoke it in the full procedure below (this is also where we specify initial conditions):

(define (iter-f n)
    (if (< n 3)
        n
        (iter 2 1 0 n)))
(define (iter A B C count)
    (if (< count 3)
        A
        (iter (+ A (* 2 B) (* 3 C)) A B (- count 1))))

BAM!
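As a sanity check (my own Python translation, not from the book), the recursive and iterative versions really do agree:

```python
def recur_f(n):
    """f(n) = n for n < 3, else f(n-1) + 2*f(n-2) + 3*f(n-3)."""
    if n < 3:
        return n
    return recur_f(n - 1) + 2 * recur_f(n - 2) + 3 * recur_f(n - 3)

def iter_f(n):
    """Same function, iteratively: a, b, c track f(k), f(k-1), f(k-2)
    as k climbs from 2 up to n."""
    if n < 3:
        return n
    a, b, c = 2, 1, 0          # f(2), f(1), f(0)
    for _ in range(n - 2):     # shift the window up to f(n)
        a, b, c = a + 2 * b + 3 * c, a, b
    return a
```

For instance f(3) = f(2) + 2f(1) + 3f(0) = 4, and f(4) = 4 + 2(2) + 3(1) = 11.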

I spent a lot of time trying to decipher the solutions at:

Also, a funny note: I thought this Weiqun Zhang guy was some teenage whiz, and I started using that as a minimum bar for myself. Well, it turns out the man is a professor/researcher/member of the Center for Computational Sciences and Engineering at Lawrence Berkeley National Laboratory. Yeah, well done. At least I don't set the bar low......

That said, this is still only the beginning; it's Day 9. But I think putting in the work to lay strong foundations now will be priceless in the future. We'll see...

Next exercise is building a recursive procedure that computes pascal's triangle. I've already pieced together what the math of the algorithm is:

Lay the triangle against the left wall:

0th row all = 1
#rows = col# + 1
if col# = row# => 1

say row# = x, and col# = y, and f(x,y) = value, then:
f(x,y) = f(x, y-1) + f(x-1, y-1)

Starting count at 0, and pushing the triangle up against a left wall, the value of any point is the sum of the point directly above it, and directly above and left of it.

Anyway, that's it for now.

Tuesday, December 8, 2015

Update 2

I'm just over a week into the CS course. I spent most of the last week working on SICP. Now, I have conflicting ideals fighting it out in my head. One side says I should look at the bigger picture and be content with being a generalist; the other says I should master what I touch. Spending 2 hours at a go on an exercise in the SICP book (in the first chapter, no less) does raise questions: if I'm having this much difficulty now, what about later on? Then again, the voice of life experience chimes in: what is easy when you first start it? Not martial arts or any kind of physical training, not math. So why should CS being difficult, while I'm alone teaching myself, be anything special?

Watching the 2nd CS61A lecture w/ Brian Harvey made me feel a lot better. Here is a legendary professor and even he is making mistakes and forgetting things on screen. Aha so the world is filled with humans, not wizards. That's good.

In the end it was more of an emotional realignment - whatever that means, I think I just made it up. I don't have to be a Wozniak-level master at everything I touch - and I am on a timeline - so for the exercises: I'll read through them, attempt them, give it some time, then check the solutions online. There are two things I'm treating as important: that I struggle to assemble the logic of how the procedures should be constructed, and that I decipher the reasoning in the solutions. I need to understand what's going on and why.

Another thing is, it looks like I've been making myself suffer just a tad too much. I'm reading before I watch a lecture. I've finished 1.3 of SICP (still have some exercises from 1.2 and 1.3 to go through today), putting me on page 107. For the MIT 6.001 course that places me at around the 4th lecture, and for CS61A that's week 2 or 3... heh, so I read ahead. But that makes the material in the lectures seem a hell of a lot simpler now. It's basically a smart guy talking about stuff I had to figure out on my own... which actually does a lot to cement it all. I'm liking it.

I started a GitHub repository to document all my coding work/notes in one place: GitHub.com/WNoxchi. I think this also adds a layer of... what's the word... accountability? trust? Anyway, that; so it's not as if I'm making up an elaborate story on a blog. The work's there, the time stamps are there. I actually decided to set up an account after impressing myself. I coded the Fibonacci sequence in Scheme right before my eyes, typing as I spoke the pseudo-code to myself next to my sister. That Rain-Man moment was quickly followed by a headache, but it was uplifting.

Fibonacci:

(define (fib n)
    (fib-iter 1 0 n))
(define (fib-iter a b count)
    (if (= count 0)
        b
        (fib-iter (+ a b) a (- count 1))))

Exercise 1.11 in SICP asks to define a procedure recursively and iteratively that computes the function:

f(n) = n, if n < 3
f(n) = f(n-1) + 2f(n-2) + 3f(n-3), if n ≥ 3

Which isn't too hard to do recursively:

(define (f n)
  (if (< n 3)
      n
      (+ (f (- n 1))
         (* 2 (f (- n 2)))
         (* 3 (f (- n 3))))))

It does exactly what the formula says: if n is less than 3, return n; otherwise, add f(n-1), 2·f(n-2), and 3·f(n-3). Now, if n within those 3 recursive calls is greater than or equal to 3, the function calls itself again and creates another 3 branches for each instance, until the condition (n < 3) is true. Then the procedure works back up the tree (or roots, I guess, since it grows downward), adding and multiplying as specified until it returns a single value.
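A quick hand trace of the first values past the base cases checks that the recursion has the right shape:

```
f(3) = f(2) + 2f(1) + 3f(0) = 2 + 2 + 0 = 4
f(4) = f(3) + 2f(2) + 3f(1) = 4 + 4 + 3 = 11
f(5) = f(4) + 2f(3) + 3f(2) = 11 + 8 + 6 = 25
```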

That makes sense. Now... how do I do that iteratively? I tried writing out the process on paper, but 15 minutes in I was still lost. I have the solution from online, but I'll need to take time today to understand it.

Of course, there are much greater hurdles waiting just behind the door. I need to build a procedure that determines prime numbers by searching for factors, and something that computes exponentials by successive squaring, though I have to check the description. Fun.
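For the prime test, the book's approach (SICP section 1.2.6) is trial division: look for the smallest divisor of n, checking candidates only up to the square root of n. A sketch along those lines, roughly following the book's names (I've added a guard for n ≤ 1 that the book's version leaves out):

```scheme
; smallest-divisor: the smallest integer >= 2 that divides n.
; If nothing up to sqrt(n) divides n, then n itself is the answer.
(define (smallest-divisor n) (find-divisor n 2))

(define (find-divisor n test-divisor)
  (cond ((> (* test-divisor test-divisor) n) n)
        ((= (remainder n test-divisor) 0) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

; prime?: n is prime exactly when its smallest divisor is n itself.
(define (prime? n)
  (and (> n 1) (= n (smallest-divisor n))))
```

Stopping at sqrt(n) works because any divisor larger than the square root pairs with one smaller than it.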

But there is a feeling, at this point more like an echo of a shadow of it, of overarching power, or competence, or just the feeling and state of mind you have when you just know how to do something. On an instinctual level that may be what I'm chasing. What I know is I feel this magnetic draw to this entire field, of Informatiks and technology, and I've learned the hard way to listen to those instincts.

Another cool thing from a lecture: apparently Computer Science is a terrible name for the field. What it really is, and the name used in a bunch of European countries, is Informatics. I may spell it with a 'k' because why not. Anyway, that's the update. It's a lot of baby stuff right now. I'm still far off from being ready to attempt the problems at Project Euler. Since it'll be a little while until I'm able to buy my own Mac, I think I'll dual boot my main machine in Linux. All these CS classes use the UNIX command shell, so I better get familiar with it. Also I'll be able to play Kerbal Space Program without memory limits )))). I'm thinking Ubuntu MATE or Linux Mint Cinnamon.

Gonna read more of Superintelligence before I say something about it.

Thursday, December 3, 2015

Update 1

So to start, the CS Course started off as a rolling train wreck that eventually found its way back onto the tracks. Waking early was an issue, but after a night of < 4 hours sleep, it looks like I forced my circadian clock back into alignment and I'm up at 7 now. I want to get that to 5 or 6, and try to finish my morning workout before sunrise.

I finished the 10th week hw set for 3.032.3x, the Material Mechanics course. It was about dislocations propagating at the atomic scale and I had to calculate how much individual atoms of a precipitate would slow disloc propagation in metal alloys. It was actually a lot easier than I thought. I'll have to copy solutions & important problems to my notes, and there'll be about a week until week 11's work is due. That was mostly day 1, 1 December.

2 December I finished the 2nd problem set for Calc I (18.01sc) and exam 1. I need to learn and get through this fast, and I always seem to 'just' almost get it, so I did the exam by checking my answers w/ the solutions right after I finished a problem. The idea is from Scott H. Young (who did his own "MIT Challenge"), and it's about tightening the feedback loop. It works. There're a lot of 'oh, that's why' and 'oh shit, that's how it's done' moments. I find I learn faster when I treat it like a physical skill.

Finally I got down to actual CS work. It was really kind of nothing, but ehh, a start. One of my courses is SICP: Structure and Interpretation of Computer Programs. I read online that a solid way to start CS is to go through the book and do all the examples. Now, I'm selfish, and I want the best, so I'm doing SICP two ways in parallel: MIT & Berkeley. I'm doing CS61A, watching Brian Harvey's Spring 2011 lectures, with reading assignments from his syllabus, and supplementing that with reading assignments in Composing Programs from John DeNero's CS61A syllabus. I don't have access to university resources and recitations/labs, and I haven't figured out yet how I'm going to do assignments and check them, so the more thorough I am the better, I'm guessing.

At the same time I'm watching the original MIT SICP lectures given to Hewlett Packard engineers and doing those readings, though it largely overlaps w/ Harvey's. For a Scheme interpreter, right now I'm using repl.it. Yesterday I read Ch 1.1 & 1.2 (they even have handy videos on the site; it's an online book) of Composing Programs, I'm already up to 1.1.6 (p22) of the SICP book, and for the great introductory tome of electrical engineering, Foundations of Analog and Digital Electronic Circuits (A. Agarwal & J. Lang), I'm on p130 (Node Method).

My goal is to get the 5 major courses on my main-line list done every day, and not have to spread them across 2 or more days. I'll update as it goes.

So that's about it: a lot of SICP work today, some Calculus administrativa and a lecture, CS106A (Prgm Methd), 6.002x (Circ & Elctrncs), and a lab in Python and Java.

I also started reading Nick Bostrom's Superintelligence, after hearing about it for the 100th time, this time on the Hello Internet podcast. Love it. I'm on p60-something; I'll write my thoughts on the AI idea later. Now: work.