GitHub code
This is a major early breakthrough for me. What's turning into the core of my study of Informatics is this book, Structure and Interpretation of Computer Programs. I'm also keeping in mind some words from Stack Exchange's CEO, Joel Spolsky: allow yourself to struggle (for worryingly endless hours) and build a deep understanding (that blog post was a great read too, btw).
Exercise 1.11 in Section 1.2 of SICP asks you to take a recursively defined function and turn it into a Scheme procedure, in both a recursive and an iterative version.
f(n) = \begin{cases} n & \text{if } n \lt 3 \\ f(n-1) + 2f(n-2) + 3f(n-3) & \text{if } n \ge 3 \end{cases}
Now, in my last post I wrote the recursive version. When I figured it out it felt like an achievement. Then the iterative challenge laughed in my face for 5 days. To recap, here's the recursive version. It's straightforward since the mathematical function is itself defined recursively; it was more like practice coding in Scheme.
(define (recur-f n)
  (if (< n 3)
      n
      (+ (recur-f (- n 1))
         (* 2 (recur-f (- n 2)))
         (* 3 (recur-f (- n 3))))))
I'm calling the function 'recur-f' instead of 'f' to distinguish it from the 'iter-f' version coming up. Up above, you define a procedure, name it, and specify what argument it takes as input. After that is an 'if' condition: if n is less than 3, output n; otherwise add together the procedure applied to (n - 1), 2 times the procedure applied to (n - 2), and 3 times the procedure applied to (n - 3) - which is exactly the math formula above, expressed in code. If any of those recursive calls still has an argument that isn't less than 3, the same process repeats inside it, ending only when every argument drops below 3.
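To make that concrete, here's a rough hand expansion of (recur-f 4) - I'm just substituting the definition step by step, and the arithmetic follows from the base case values f(2) = 2, f(1) = 1, f(0) = 0:
(recur-f 4)
(+ (recur-f 3) (* 2 (recur-f 2)) (* 3 (recur-f 1)))
(+ (+ (recur-f 2) (* 2 (recur-f 1)) (* 3 (recur-f 0))) (* 2 2) (* 3 1))
(+ (+ 2 2 0) 4 3)
(+ 4 4 3)
11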
Okay. Now, to do this iteratively we need to understand two things - and I think this is part of what took me so long to figure it out. The first is the difference between recursion and iteration as a computer sees it. This finally stuck for me today.
A recursive procedure is going to make successive calls to itself, yes, but the key point is that the current state of the computation cannot be determined just by looking at the variables at that stage. It is computation nested within computation; if you cut out a stage, the whole thing would break. It's essentially saying, "I'll do this other thing later, and a critical part of that other thing is the result of this thing I'm doing now."
To be honest it makes me think of a talk by Stephen Wolfram at SXSW 2015 where he spoke about the power of using computation as a fundamental tool for scientific research, and its advantages over relying solely on equations. Basically, very simple code can generate very complex behavior that would be hell to express with an equation. Not to get too far off on a tangent, but that also reminds me of a thing I'd say about "thinking 4-dimensionally." Whether that 4th dimension is time or whatever, or if I really meant 3 vs 2, doesn't matter. The point was that a lot of the thinking I see looks at fixed states: how the world is today, how a company is doing at this instant; I rarely see speech reflecting thought that either looks forward into the future or in some way captures mutual-feedback mechanisms. I'm doing a terrible job of translating this thought to written English.
Point is: cool! perhaps not everything can be captured by a single snapshot of it on paper, and the world is much more dynamic, and more of a system than a unitary thing. And perhaps the next stage of human thought is.. ah forget it I'll explain more when I have a better handle of what I'm talking about.
So back to iteration. Its defining feature is that all information about the procedure's state can be found in that stage's state variables - explicit variables that tell you what stage the computation is at and what's going on. For example, say we have a procedure that adds two numbers by decreasing the first by 1 and increasing the second by 1, repeatedly, until you're left with 0 and a total:
(+ 3 2)
(+ 2 3)
(+ 1 4)
(+ 0 5)
5
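A minimal sketch of that adding procedure in Scheme might look like this (add-iter is a name I made up; it just decrements one number and increments the other until the first hits 0):
(define (add-iter a b)
  (if (= a 0)
      b
      (add-iter (- a 1) (+ b 1))))
Calling (add-iter 3 2) passes through exactly the states above and returns 5 - and at any point, the two arguments alone tell you everything about where the computation is.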
At every stage you know exactly where this operation is going. If we did this recursively (with inc being: increase by 1):
(+ 3 2)
(inc (+ 2 2))
(inc (inc (+ 1 2)))
(inc (inc (inc (+ 0 2))))
(inc (inc (inc 2)))
(inc (inc 3))
(inc 4)
5
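For comparison, a sketch of the recursive version (add-recur is my name for it; inc is the same increase-by-1 as above), which is what produces that growing chain of deferred inc calls:
(define (inc x) (+ x 1))
(define (add-recur a b)
  (if (= a 0)
      b
      (inc (add-recur (- a 1) b))))
(add-recur 3 2) builds up (inc (inc (inc 2))) before collapsing it back down to 5.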
If we lost any stage of the computation - if we dropped some of those inc calls - we'd destroy the entire operation. But, so far it seems, recursion is easier to define than iteration. To define a procedure iteratively, you have to be able to capture the entire process in your head and write it down explicitly. And that's why the math formula up top gave me such a hard time. How the hell do you turn that into code?
Well once you've spent enough time struggling and panicking, you go over solved examples and work through it enough that you begin to build your own logic. The way I did it was this:
Again the original math function:
f(n) = \begin{cases} n & \text{if } n \lt 3 \\ f(n-1) + 2f(n-2) + 3f(n-3) & \text{if } n \ge 3 \end{cases}
First we'll need four state variables: one each for f(n-1), f(n-2), and f(n-3), plus a counter to track what stage of the computation we're in. We also need to understand how these variables change throughout the procedure. We'll call the first three A, B, and C, and the counter count.
So what's the overall state of the function's first recursive call?
At n = 3 we have f(3) = f(3-1) + 2f(3-2) + 3f(3-3), and from the base case the pieces are f(2) = 2, f(1) = 1, and f(0) = 0.
Using these as initial conditions (our iterative mechanism wouldn't have been activated otherwise), we start with the variables A = 2, B = 1, C = 0, and count = n (which for this first case is 3).
How this evolves: to move up to n = 4, A now has to represent the next recursive piece, f(3). Therefore the new value of A is (A_old) + 2(B_old) + 3(C_old) = A_new. B and C change as well: B_new = A_old, and C_new = B_old. The values shift over by one variable, with A acting as a factory for generating each stage's new value. The counter decreases by 1 at each iteration; when it drops below 3, the process stops and the current value of A - which by then is f(n) - is returned.
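Worked out by hand from that rule, the state variables for n = 5 evolve like this (each new A is the old A + 2B + 3C):
A = 2,  B = 1,  C = 0,  count = 5
A = 4,  B = 2,  C = 1,  count = 4    (4 = 2 + 2*1 + 3*0 = f(3))
A = 11, B = 4,  C = 2,  count = 3    (11 = 4 + 2*2 + 3*1 = f(4))
A = 25, B = 11, C = 4,  count = 2    (25 = 11 + 2*4 + 3*2 = f(5); count < 3, so A is the answer)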
This gives the following procedure:
(define (iter A B C count)
  (if (< count 3)
      A
      (iter (+ A (* 2 B) (* 3 C)) A B (- count 1))))
The above is the internal iteration mechanism of our iterative procedure. We invoke it in the full procedure below (this is also where we specify initial conditions):
(define (iter-f n)
  (if (< n 3)
      n
      (iter 2 1 0 n)))
(define (iter A B C count)
  (if (< count 3)
      A
      (iter (+ A (* 2 B) (* 3 C)) A B (- count 1))))
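As a quick sanity check, the two versions should agree; working the recurrence out by hand gives f(3) = 4, f(4) = 11, f(5) = 25, f(6) = 59, and indeed:
(recur-f 6)
59
(iter-f 6)
59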
BAM!
I spent a lot of time trying to decipher the solutions posted at Weiqun Zhang's blog, among other places.
Also, a funny note: I thought this Weiqun Zhang guy was some teenage whiz, and I started using that as a minimum bar for myself. Well, it turns out the man is a professor/researcher/member of the Center for Computational Sciences and Engineering at the Lawrence Berkeley National Laboratory. Yeah, well done. At least I don't set the bar low...
That said, this is still only the beginning, it's Day 9. But I think putting the work in to lay strong foundations now will be priceless in the future. Gur du way...
The next exercise is building a recursive procedure that computes Pascal's triangle. I've already pieced together the math of the algorithm:
Lay the triangle against the left wall:
every row's 0th column = 1
row x has x + 1 entries (#cols = row# + 1)
if col# = row#, the value = 1
say row# = x, col# = y, and f(x, y) = the value there; then:
f(x, y) = f(x-1, y-1) + f(x-1, y)
Starting the count at 0, and pushing the triangle up against the left wall, the value of any point is the sum of the point directly above it and the point directly above and to its left.
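As a rough sketch of where that's headed (pascal is just my own name for it, and this is nothing more than the recurrence above transcribed, with the two edge conditions as base cases):
(define (pascal row col)
  (if (or (= col 0) (= col row))
      1
      (+ (pascal (- row 1) (- col 1))
         (pascal (- row 1) col))))
(pascal 4 2) gives 6, the middle entry of the row 1 4 6 4 1.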
Anyway, that's it for now.