Tuesday, December 31, 2013

The Environment for New Normals

New normals come out of left field and strike like thunderbolts. We don't see them coming, but they seem stunningly obvious in hindsight. Going into the 1968 Olympics, everyone thought that the best technique for the high jump was the straddle. All it took was the performance of a single high jumper with a radical new technique, the Fosbury Flop, to sweep the straddle into the dust bin of history. Four short years later, at the 1972 Olympics, the Fosbury Flop was the new normal.

Meanwhile, there hasn't been a new normal in teaching and learning in my lifetime. I don't think there's been one in centuries, maybe even millennia. Again, I'm not talking about the institution of schooling; I'm talking about the intimate interaction among teacher, student, and content as seen from one foot away. At that distance, a teacher working with a student on multi-digit multiplication in the middle of a busy class looks the same as a tutor working with a student after school, a parent working with a child at the kitchen table, or a teacher working with a student in a one-room schoolhouse on the prairie in the 1870s. The tools may be different, but the basic strategies and techniques remain the same.

Is it possible to pull off a Fosbury Flop in education? I'm not sure, but I sure hope so. I think there are better techniques in teaching that are still undiscovered, and we need something that can shake our understanding of both teaching and learning to the core. Comparing the landscape of high jumping in the 1960s with the landscape of education today may help us understand how to bring about the new normal we need.


Variation and Experimentation


To arrive at a new normal, there needs to be some level of variation and experimentation that occurs at the bottom and bubbles up to the top. In the 1960s, all of the best high jumpers were busy refining the straddle technique, and it is very doubtful that any of them were going to break away to try something radically different. It took Dick Fosbury, an unknown high schooler at the time, to gamble and invest five years in a completely unproven technique.

If Dick Fosbury had experimented with the wrong technique, the flop would have failed. If Dick Fosbury had quit after four years before perfecting his technique, the flop would have failed. If Dick Fosbury had suffered a career-ending injury, the flop would have failed. If Dick Fosbury had lacked the physical gifts to jump higher than 2.13 meters (7 feet), the flop would have failed. New normals emerge because of individuals, but relying on a single individual for a new normal is not a good bet. Ideally, you want a thousand high jumpers working to perfect the straddle technique, and you want a thousand high jumpers working to find the next technique to replace the straddle.

It is hard for me to evaluate the level of variation and experimentation in high jumping in the 1960s because I don't know how many other kids were perfecting radical high jumping techniques in their backyards. What I do know is that the level of variation and experimentation in public schools has decreased significantly since I became a teacher in the 1990s. The standards movement led to a number of top-down reforms. Curriculum standards were removed from local control and transferred first to the state, and now to the national level. There has been a massive drive for consistency. It is now considered unacceptable for two fifth-grade science teachers in neighboring classrooms to each be doing their own thing. The goal is for everyone to collaborate and standardize curriculum and practices. Twenty years ago, half of the math teachers in Massachusetts were using math programs that were either home grown or cobbled together from multiple sources; today, virtually everyone uses a monolithic math program purchased from a major publisher.

To be fair, the variation that existed before the standards movement wasn't very productive. New normals didn't arise from this experimentation, and many of the experiments were pointless and yielded poor results. But you can't discover new normals without variation and experimentation, and those experiments cannot be centrally curated. After Dick Fosbury won the US Olympic Trials in 1968, the trials were held again because the US Olympic Committee still didn't believe in the flop and didn't want Fosbury on the team. If every high jumper on Dick Fosbury's high school track team had to standardize on a single technique, I don't think that Fosbury could have persuaded anyone in 1963 that his technique was superior and should be adopted by everyone.


Supervision and Coaching


When Dick Fosbury began working on the flop, he was a member of his high school track team. This means that his coaches and teammates were watching him jump. They could see both his technique and the heights he was clearing. Initially, his coaches wanted him to switch back to the straddle. But at some point, they could see that the flop was working for him (Fosbury eventually broke his high school's high jump record and placed second at the state meet) and started to give him tips and feedback to improve his technique as best they could.

On the other hand, teaching in schools largely happens behind closed doors. No adult really knows what you are doing. One of the motives behind the push for consistency across classrooms is to de-privatize teaching and to get teachers talking and sharing strategies and practices. Past efforts to break down walls include lesson study and the formation of professional learning communities, but none of these efforts has been successful. Very few school districts employ coaches, and most teachers view coaches as nuisances rather than as help. Supervision consists of the principal coming by to observe a lesson twice a year, every other year.

To get a sense of what supervision and coaching look like in schools, imagine a high school track team where every member of the team practices in isolation. Twice a year, the athletic director, a former javelin thrower, swings by to check on you and to write a report for your file. Four times a year, the whole team gathers, but you may not do any high jumping on those days. And in October, you finally receive your scores from the state meet (the one formal competition of the year) that was held in April.


A Single Metric for Performance


Dick Fosbury made a lot of adjustments to the flop before he won the gold medal at the Olympics in 1968. One of them was learning that he had to adjust his point-of-takeoff as the high jump bar was raised. While making these adjustments, Fosbury was able to gauge whether his technique was improving or getting worse by observing the heights he was clearing. He was also able to compare his performance against established benchmarks, such as the high school high jump record and the world record. Without this data, would Fosbury have been able to improve the flop? Would he have persevered for five years? Would the US Olympic Committee have allowed him on the team? Would he have convinced the world his technique was better?

We don't have a single metric for performance in education. Even the most ardent supporter of standardized testing wouldn't argue for a single metric. Instead of adjusting our technique to jump higher, we are forced to adjust our technique to optimize the vertical force we exert at the point-of-takeoff or the curve of our back. We hope that will translate to higher jumps, but we don't really know. We also have no idea how our performance compares with the teacher across the hall, the teacher in the next town over, or the teachers in Shanghai.


Noisy Feedback


Feedback in educational systems tends to be extremely noisy. Part of that is because there isn't a single metric for performance, so one metric may indicate that you are improving, while another may indicate that you are getting worse. But feedback also suffers from lag and the number of variables in play. When you experiment with a new technique, it can take days, weeks, months, or even years before measurable student outcomes emerge. This makes it difficult to tease out the effect of the new technique from the effect of other intervening factors. It's like sitting at a console and mashing on buttons. After a few hours, you start to see some things happen, and you have to figure out which button, or sequence of buttons, caused which thing to happen.

The existence of numerous variables that are difficult to control for also contributes to noisy feedback and makes it hard to compare results across multiple trials. It's as though some high jumping competitions take place in sand with a strong headwind, while others are held indoors on spongy surfaces. Because of this, research studies often contradict each other, and we have to rely on meta-analyses that frequently draw faulty conclusions. Many of these studies conclude that procedural techniques are more effective for teaching math. But drill down, and you will find that most teachers in these studies had more experience using procedural techniques and were ill-equipped to use the conceptual technique being studied. The longer, more in-depth studies that might tease out those effects carry much less weight in a meta-analysis, and these meta-analyses drive policy decisions.


The Olympic Stage


Dick Fosbury had the opportunity to win and demonstrate the Fosbury Flop in front of an audience of millions, including thousands of his fellow high jumpers. I can't think of anything comparable in the educational world. There are TED talks, but those are curated and you don't really have the same opportunity to demonstrate the performance of a new technique. Finland shocked the world by finishing first or second in math, science, and reading on the 2006 PISA, which caused educational researchers and policymakers to flock to Finland. But the educational techniques used in Finland were selected by someone at some point. Dick Fosbury didn't have to pass through any similar gatekeepers. He just had to jump higher than anyone else.


Incentives and Competition


The best high jump recorded using the flop is less than 5% higher than the best high jump recorded using the straddle. Yet, that tiny performance gain was enough to convince the entire world to switch from the straddle to the flop. Teddy Rice would argue that competition creates a sense of urgency that causes us to seize on incremental performance gains, no matter how small. I don't disagree with him, but I don't agree that the failure of schools to seize on similar incremental improvements is evidence that schools don't compete. I just think that the basis for competition is a little different.

While Dick Fosbury was developing the flop, the rest of the world was engaged in an arms race to perfect the straddle. The world was shocked by the Fosbury Flop not because the world was complacent, but because it was suffering from a case of tunnel vision. Hundreds of high jumpers were competing to be the best, and it looked like perfecting and mastering the straddle was the way to get there. When Fosbury won the gold medal at the 1968 Olympics, he showed everyone that there might be another way. A few high jumpers tried it and won. A few more tried it and won. New records were set, and soon the number of high jumpers trying the flop was a tidal wave. The transition was fast.

Let's contrast this with what happened when Finland topped the charts on the 2006 PISA. Finland didn't surprise the world because it used a new technique; it surprised the world because no one was expecting Finland to be at the top. Educational researchers and policymakers immediately flocked to Finland to see what they could learn, but the reports coming back were garbled. No one could tell which buttons Finland had pushed or if Finland's results could be duplicated by another country operating under another set of conditions.

I can guarantee you that if another country – such as Brazil – had taken lessons learned from Finland and started climbing up the rankings in 2009 and 2012, the world would have noticed and reacted urgently. But that didn't happen. Instead, Finland has fallen in the rankings and it is looking more and more like its performance in 2006 was more noise than signal.

The same thing happened in the 1990s with the TIMSS. This time the darlings were the Singapore math curriculum and lesson study from Japan. Although widely adopted, neither technique has achieved much long-term success. The Fosbury Flop didn't become the new normal because Dick Fosbury won with it; it became the new normal because high jumpers across the world began to win with it. So far, there has been a disappointing pattern of that not happening when new techniques are adopted. This pattern of failure and the lack of a strong signal makes schools reluctant to jump on the next new thing.

Even worse, in education, we aren't making incremental progress on the techniques that we do have. In the United States, ideas such as learning by doing, focusing on problem solving and critical thinking, cooperative learning, personalized education, community service and educating for social responsibility, and performance assessments were all widely practiced by the progressive education movement in the late nineteenth century. Our best teachers today are not practicing those techniques at a higher level than the best teachers were a hundred years ago. The state of the art has not moved much at all.

Instead of competing to raise the bar, it feels like we are competing to get more teachers to the bar. It makes sense when there are so many underperforming schools and teachers that could be doing so much better if we could just get them to apply the techniques we already have. And competing to be the best is pretty hard when there is so much noise in the system and we don't have a single metric for performance. But I still believe that schools will compete to be the best if something rises above the noise and they become convinced that a new normal is possible. Dick Fosbury did it with a 5% performance gain in high jumping in 1968. It is going to take more than that to do it in education in 2014. I guess it is up to us to figure out what that is and then to make it happen.

Sunday, December 29, 2013

The Fosbury Flop

In 1968, Dick Fosbury shocked the high jumping community by winning the Olympic gold medal in high jumping with the Fosbury Flop. The dominant technique at the time was the straddle. Thousands of high jumpers around the world were competing to squeeze as much performance out of the straddle as possible, and incremental improvements were quickly adopted and studied. But by 1972, 28 out of the 40 competitors in the high jump at the Olympics were using the Fosbury Flop, and the last world record set using the straddle was in 1978. The normal in high jumping before 1968 was the straddle; after, it was the flop.

My goal as founder and Chief Learning Officer of Vertical Learning Labs is to create a new normal in teaching and learning. My goal isn't to disrupt the institution of schooling, but to change how we think about teaching and learning at a one-to-one level. In my opinion, everyone is using the straddle technique today. This includes every classroom, our best teachers, charter schools, private schools, one-on-one tutors, homeschoolers, MOOCs, Khan Academy, flipped classrooms, camps, after school clubs, Finland and Singapore, parents at home in a teachable moment… we are all using the straddle. Researchers are studying expert straddlers and policymakers are trying to figure out how to get everyone to use those best practices, but none of that is good enough. We need a flop.

You don't arrive at a flop through iteration and incremental improvement. Advocates of procedural and conceptual learning are on neighboring hills sniping at each other. Each camp wants to pull everyone else uphill. Up their hill. Meanwhile, the flop is a mountain sitting miles away that towers over both of them. How do we get there?

Dick Fosbury developed his flop in complete isolation. He tried to use the straddle, but felt that the technique was too difficult for him to master. So he started from an outdated version of the scissors. It took Fosbury five years to refine the flop to the point where he could win the Olympics. In the early years, his coaches tried to talk him out of it until he began showing improvement and setting personal bests.

As a student of new normals, I have four questions when it comes to the Fosbury Flop:

  1. Would the flop still have appeared if Fosbury hadn't developed it?
  2. Was winning the Olympics necessary for the flop to gain traction and take over?
  3. How obvious is the flop in hindsight? How much better is it than the straddle?
  4. Is there another flop waiting in the wings to become the next new normal?

It appears as though the development of the flop happened just as high jumping was transitioning to softer landing zones that made it safer to land on one's back, neck, and head. The flop wasn't actually feasible at world record heights until the 1960s. And Fosbury evolved the flop from an existing technique that was popular at one time. So it is possible that there were other high school high jumpers experimenting in the same way, but Fosbury was the first to get noticed because he was the first to win with the flop on the world stage. On the other hand, Fosbury had to put up with a lot of crap. The results from his initial experimentation weren't promising and everyone was telling him to go back to the straddle. I'm guessing that Fosbury only stuck with it because he knew that he would never be world champion using the straddle, and he wanted to be world champion enough to persevere. It would be interesting to know how many other high jumpers in the world were experimenting with radical techniques, and how many of them were close to breaking out when Fosbury did. One in a thousand? One in ten thousand? One in a million? Is there a tipping point when a new normal becomes inevitable?

I don't think that Fosbury had to win the Olympics in order for the flop to become the new normal. Even if Fosbury had only won bronze, I think you would have seen a lot of floppers at the 1972 Olympics. The flop didn't take over because the high jumping establishment instantly saw it as superior; the flop was still received skeptically after Fosbury's win. But enough individual high jumpers (probably the ones in the second tier who realized they weren't going to make it to the Olympics sticking to the straddle) were willing to take a risk and start using it, and they found success. It's when floppers besides Fosbury started winning that the flop became the new normal. What if Fosbury had still won the NCAA championship in 1968, but hadn't made it to the Olympics? Clearly, Fosbury had to win on a stage large enough to encourage a critical mass of other high jumpers to try the flop, but how large that stage needs to be is very unclear.

The last world record set with the straddle was 2.35 meters. The current world record set with the flop is 2.45 meters, a difference of less than 5%. Knowing how quickly the Fosbury Flop became the new normal, I was expecting a greater difference. However, in athletics, a difference of 5% is probably the difference between an Olympic champion and a good college athlete, so I shouldn't be surprised. Usually, new normals are obvious in hindsight; that is how and why they become new normals. As a non-high jumper, it isn't obvious to me that the flop would be superior to the straddle. And apparently, there is considerable debate about this within the high jumping community. Some feel that the straddle is as good as the flop, but that the flop is much easier to master. So, while an expert straddler will jump as high as an expert flopper, there will be a hundred expert floppers for every expert straddler.
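To put a number on it, using the two records just quoted:

```latex
\frac{2.45 - 2.35}{2.35} \approx 0.043 \quad \text{(about 4.3\%, i.e. under 5\%)}
```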

I think that the world needs a new normal in teaching and learning, and I think that my approach to curriculum design can produce that new normal. I've already done what Fosbury did and refined my process in isolation; I'm just waiting for my Olympic moment. What will it be? How large does it need to be? The student outcomes I see from my curricula are much more than 5% better than what we achieve with our current curricula, but we don't see the same rewards and willingness to take risks to be 5% better, so the deltas need to be greater to reach a tipping point in education. Still feeling my way to next steps, but I like this analogy. I should have a problem description and environmental analysis soon. After that, my goals and strategies. Stay tuned!

Friday, December 27, 2013

Building Understanding

You would think that most curriculum would be designed to build student understanding, but that isn't the case. Many people, including teachers and curriculum designers, continue to advocate procedural learning over concept development. This is partly due to a natural bias to teach the way we were taught, but also partly due to the buildup of anecdotal evidence suggesting that concept development is either less effective or less efficient than procedural learning.

How is that possible? How could memorization be a more effective way to learn something than understanding it? Understanding definitely pays off in the long run. Students who rely on memorization eventually get stuck while students who understand what they learn are able to move on to more advanced studies. The problem isn't that building understanding is ineffective, it's that curricula designed to build understanding are ineffective. I would argue that if we were actually building understanding with our curricula, then it would quickly become our new normal because it would be so much more effective than what we are doing now.

Let's consider the laws of exponents:

  a^m · a^n = a^(m+n)
  a^m ÷ a^n = a^(m−n)
  (a^m)^n = a^(m·n)
In a procedural approach, students are given the laws to memorize, and then asked to apply those laws in dozens of practice problems. To avoid interference, each law is typically introduced on a separate day. While students will still have to pick out which law to apply for a given problem on a cumulative test, few problems will require the student to apply more than one law at a time. At most, a student may be asked to simplify an expression like this:


When a procedural approach is taken, the expectation is that students will be introduced to these laws in pre-algebra, but that they won't remember them beyond the test. The laws will then be re-taught in Algebra and Algebra II, and only mastered in Algebra II after students are applying them on a daily basis. This means that we are relying on brute force and repetition to get students to mastery, and we aren't expecting the low-performing or learning disabled students who will never get through Algebra II to master these laws at all.

In a conceptual approach, the laws of exponents are first introduced by deriving them, and demonstrating how and why they work:

  a^2 · a^3 = (a · a) · (a · a · a) = a^5 = a^(2+3)
  a^5 ÷ a^2 = (a · a · a · a · a) ÷ (a · a) = a^3 = a^(5−2)
  (a^2)^3 = (a · a) · (a · a) · (a · a) = a^6 = a^(2·3)

The theory is that if you can derive a rule and explain it, then you understand it. However, once a student can do that, the curriculum then returns to the traditional path: each law is practiced in isolation to minimize interference and mastery is not expected until two years later in Algebra II after students are applying the laws on a daily basis. So even though we are teaching for understanding, we are not expecting a different outcome. This is why many people see concept development as a waste of time.

I don't believe that curriculum designers are simply going through the motions or paying lip service to building understanding by slapping a different introduction onto the traditional procedural approach; I think that they don't know what else to do. We want the students to apply the laws of exponents. They understand the laws of exponents (at least well enough to explain and justify them). So what else is there?

The problem is that curriculum designers are conflating the laws of exponents with the task that students need to perform. Because of this, they view the derivation of the laws simply as a bridge to the laws themselves, and once the students understand the laws, they think their job is done. The goal isn't to understand the laws of exponents, but to understand how to simplify exponential expressions well enough to debug things if something goes wrong or to reason through things if something new and unexpected pops up. To get there, the steps in the task need to be explicit and grounded enough so that students can try things out, and problems need to be complex enough so that students need to try things out. We don't want automaticity and abstraction at this point.

To launch into a complex problem, students need to understand their goal and a set of basic tools and strategies for reaching that goal.


So what does a student need to know and be able to do to perform this task?

The student needs to know that, in order to write an expression in simplest terms, he or she must be able to identify, re-group, and count all of the factors in the expression.

To re-group and count the factors, the student needs to be able to interpret exponents, which means that the student needs to know what an exponent is and apply that definition to write out the factors in an exponential expression using multiplication:


To do that, a student needs to be able to identify the base for an exponent. Our expression has six exponents. What is the base for the second exponent (6)? Is it 3a or a? What is the base for the fifth exponent (3)? If you are fluent with exponential expressions and the laws of exponents, you may be able to pick out bases intuitively, but how would you figure out a base if you couldn't see it right away?

To identify the base of an exponent, you need to apply the rules for order of operations. To make this easier, let's make each operation in the expression explicit (another sub-task that students need to be able to perform that we don't teach):


Now I can tell that the base of the second exponent (6) is a and not 3a because I wouldn't multiply by 3 until after I've evaluated the exponent.
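One way to check this mechanically is to lean on a programming language, since expression parsers encode the same order of operations. A minimal Python sketch using the standard ast module (the expression string here simply mirrors the 3a^6 situation described above):

```python
import ast

# Exponentiation binds tighter than multiplication, so "3*a**6"
# parses as 3 * (a**6): the base of the exponent 6 is a, not 3a.
outer = ast.parse("3*a**6", mode="eval").body
assert isinstance(outer.op, ast.Mult)        # the outermost operation is the multiply
assert isinstance(outer.right.op, ast.Pow)   # the exponent is evaluated first...
assert outer.right.left.id == "a"            # ...and its base is just a
```

The parser "applies the rules for order of operations" for us, which is exactly the sub-task we are asking students to perform by hand.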

Once all of the exponents have been written out as multiplication, then the factors can be re-arranged (commutative and associative properties of multiplication) and re-grouped using exponents.

While this process may seem more complicated than memorizing and applying the laws of exponents, all we are doing is making every step explicit. If you were to attempt to solve the same problem using the laws of exponents, you would find yourself going through the same process. You may not be aware of some of the steps because they are so automatic and you are doing them in your head subconsciously, but they are there. Making them explicit actually makes it easier for students who don't yet have that level of automaticity. It also makes it easier to catch any errors you may be making.

A common error is treating coefficients like exponents. When students are adding exponents, they instinctively want to add the coefficients:

  6a^2 · 3a^5 = 9a^7   (wrong; the correct result is 18a^7)
And when they are multiplying exponents, they want to multiply the coefficients:

  (3a^5b^2)^4 = 12a^20b^8   (wrong; the correct result is 81a^20b^8)
Their pattern recognition causes them to home in on the numbers and to start operating on them instead of thinking in terms of factors. Thinking in terms of the laws of exponents encourages students to go straight to computation. You identify which law applies, and then you add or multiply.

Stepping back and explicitly thinking in terms of factors instead should help students catch and correct these errors:

  1. What factors are there in this expression? A 6 and two a's, and a 3 and five a's.
  2. Can I re-arrange and group any of the factors? I can re-arrange the factors to group the a's, so now I have a 6, a 3, and seven a's.

The laws of exponents don't enter the student's thinking at all except as a short cut for combining two a's and five a's.

  1. What factors are there in this expression? Four 3's, four groups of five a's, and four groups of two b's.
  2. How many a's and b's do I have? Four groups of five a's is twenty a's and four groups of two b's is eight b's.
  3. Can I re-arrange and group any of the factors? Nope.

Again, the laws of exponents only serve as an efficient way to count factors.
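Because the factor-counting process is completely mechanical, it can be sketched in a few lines of code. Here is a minimal Python illustration (the `expand` and `regroup` helpers are invented names for this sketch, not from any curriculum): each term is written out as a flat list of factors, then the factors are counted and regrouped, reproducing both worked examples above.

```python
from collections import Counter

def expand(factors, exponent=1):
    """Write a term out as a flat list of factors.
    `factors` is a list of (base, power) pairs, e.g. 6a^2 -> [(6, 1), ('a', 2)].
    An outer exponent repeats the whole group, as in (3a^5 b^2)^4."""
    out = []
    for base, power in factors:
        out.extend([base] * (power * exponent))
    return out

def regroup(factor_list):
    """Count matching factors, folding numeric factors into one coefficient."""
    counts = Counter(factor_list)
    coefficient = 1
    groups = {}
    for base, count in counts.items():
        if isinstance(base, int):
            coefficient *= base ** count   # e.g. a 6 and a 3 become 18
        else:
            groups[base] = count           # e.g. seven a's become a^7
    return coefficient, groups

# 6a^2 * 3a^5: pool the factors of both terms, then regroup.
print(regroup(expand([(6, 1), ('a', 2)]) + expand([(3, 1), ('a', 5)])))
# -> (18, {'a': 7}), i.e. 18a^7

# (3a^5 b^2)^4: the outer exponent repeats every factor four times.
print(regroup(expand([(3, 1), ('a', 5), ('b', 2)], exponent=4)))
# -> (81, {'a': 20, 'b': 8}), i.e. 81a^20 b^8
```

Notice that the laws of exponents appear only implicitly, as the shortcut `power * exponent` and the count of repeated factors, which is exactly the role the text above assigns them.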

When building understanding, we need to be clear on the tasks that we want students to perform. We don't want them to derive or apply the laws of exponents. We want them to simplify expressions containing exponents. When we do that, we uncover critical sub-tasks that students need to be able to do, such as identify the bases of exponents. If a student cannot identify the base of an exponent, the laws of exponents are not going to help, and the student will be unable to complete the task.

The mistake that many curriculum designers make is thinking that breaking down a task represents a form of scaffolding. Scaffolding consists of supports that we put into place while building a structure and remove once the structure is complete. That is not at all what we are attempting to do. The sub-tasks that we identify through a task analysis represent what students need to know and be able to do; they aren't a bridge to a higher-level abstraction that will eventually replace the sub-tasks, but a foundation for reasoning and problem solving that enables abstraction.

Tuesday, December 24, 2013

The Prison of Core Beliefs

There have been and continue to be attempts to establish new normals in curriculum design. None of them have been particularly successful, and I will argue that none of them will ever be successful as long as they fail to change expectations for student performance. Right now, when we teach something, most teachers expect that a third of the students won't get it, a third will get it just well enough to pass the test, and a third will get it and understand it well enough to hold onto it. We think of that as normal.

Every once in a while we will be encouraged to try something different, but when we try something different and the results end up basically the same, it only reinforces what we believed and expected in the first place: some students get it and some students won't, and there is nothing that we can do to change that. If you truly believed that, how long would you bang your head against the wall trying to alter reality before reverting to the status quo? Jumping to a new normal is going to take years, if not decades. And much of that time will be spent wandering alone in a fog with no sense of direction. If we want people to commit to and complete that journey, then we need to arm them better.


Standards


The big push over the past 30 years has been the standards movement. The basic premise of the standards movement makes complete sense: establish standards for what students should know and be able to do, test students against those standards, and use that assessment data as feedback for continuous improvement. The No Child Left Behind (NCLB) Act set a goal for 100% of our students to reach proficiency by 2014, and sticks and carrots were put into place to make sure that the system responded with urgency.

The standards movement has been fairly successful at taking the slack out of the system. Our lowest-performing districts, schools, and teachers got a serious wake up call, and we have now raised the floor on student performance. By taking out the slack, I mean that people are finally doing the things that they should have been doing all along. By redoubling our efforts, we have shifted our classrooms from an expected 33% | 33% | 33% breakdown in student performance to 25% | 40% | 35%. While that is nothing to sneeze at, it isn't sustainable and we have run out of ways to push performance higher.

To achieve the shift to a 25% | 40% | 35% breakdown, we threw massive resources at the problem. And what the people on the ground have taken away from all that effort isn't that the 8% that moved from don't-get-it to get-it-just-well-enough-to-pass-the-test are more capable than we originally thought, it's that we only got a few more students over the bar by lifting them up and over it ourselves. Because once we stop lifting them, they will be right back where they started in the don't-get-it camp. Core beliefs have not changed at all.

The standards movement is at a crisis point. We are bringing the Common Core and a new regime of national tests online, but student performance has largely plateaued and timelines for goals are being pushed back. The big question is: What happens when we can't afford to continue throwing sticks and carrots at the system? Will anything survive or will the old slack return? In order to become a new normal, something has to be so compelling that we keep doing it even after someone stops pushing it. It has to become internalized, and I'm not sure if anything from the standards movement has become internalized. Just imagine what would happen if state testing (or its replacement) went away tomorrow.


Understanding by Design


The curriculum design process with the greatest mindshare today is Understanding by Design. Again, the basic premise makes complete sense: start with an enduring understanding and then design a curriculum to achieve that enduring understanding. An enduring understanding is intended to be an audacious goal that galvanizes us to bring everything to the table. But it presumes that the tools needed to achieve this goal are known and that iteration is sufficient to find what we need.

One of my favorite television shows is Kitchen Nightmares, where Gordon Ramsay visits failing restaurants and helps their owners turn things around. As much as I enjoy the show, Gordon Ramsay has it easy. Models for highly successful restaurants are out there and can be studied. We know what makes for a great restaurant. The same can't be said for great curriculum. This means that, to turn around a curriculum, you need to try a lot of things that have never been tried before, most of which aren't going to work. Even worse, if you do try something that will eventually work, it probably isn't going to work the first dozen times you try it because you are new at it. That takes perseverance.

Unfortunately, what usually happens is that we try a few things, they don't work and we don't come close to reaching our audacious goal, and this ends up reinforcing our core belief that some students will get it and some won't and that this outcome is normal and to be expected. Or we pick a goal that sounds audacious but really isn't. Either way, the status quo is maintained and nothing new is discovered.


Power Standards


Power standards were a popular approach to curriculum design, but they have fallen out of favor, largely because the approach proved to be ineffective. I actually adopted power standards for a few years and still think that they provide a useful framework for thinking about curriculum, but they definitely aren't enough to create a new normal on their own.

The basic premise behind power standards is that some standards are more important than others and that we should focus on the most important ones. Again, perfectly logical. One of the nice things about using power standards is that teachers need to come together to decide which standards are power standards (a rule of thumb is that only one third of the standards are power standards). The criteria for being a power standard are endurance, leverage, and readiness. If well-facilitated, these conversations could lead to a much deeper understanding of the standards themselves.

I like power standards because of the emphasis on leverage. Some standards are frequently leveraged to reach other standards. For example, you need to understand order of operations in order to understand how to simplify expressions. If the curriculum designer understands that and can build that into the curriculum, then simplifying expressions will end up making a little more sense to the student. This is part of what we do when breaking down tasks in a task analysis.

Understanding by Design gives you a distant landmark to shoot for, but no compass. Using power standards, the landmark is a little closer and you do have a rudimentary compass in leverage, but the compass can be very hard to read and there is little driving you forward to brave the fog. How confident are you that leverage will lead you to a place where all students will get it? If you are unsure, it is that much easier to turn around and go back to climbing the mountain you know.


Adaptive Learning


One of the hot trends today is injecting big data into the standards movement. The basic premise is that we aren't closing the loop because: (1) formative assessment data is not getting to the teachers, so teachers aren't able to adapt their instruction to the needs of their students; or (2) formative assessment data is getting through but teachers don't have enough resources at their fingertips to respond at a granular level. Either way, technology is riding to the rescue. Soon, a computer will be able to collect highly detailed formative assessment data from a student, and then match that data against a database of world-class curricula to tailor a learning program for that student in real-time.

Great, except the world-class curricula suck and will continue to suck until we have a curriculum design process that can take us to a new normal. Where is the world-class curriculum for teaching students to use prime factorization to find greatest common factors and least common multiples? If there is something out there that actually makes sense to students, I haven't seen it. This is why one-on-one tutors and elite private schools end up using the same basic curriculum that everyone else uses: there aren't any better alternatives and more data isn't going to change that. Why isn't there urgency to create something better? Because our curricula are giving us the outcomes that we expect and consider normal.


Student Engagement, Multiple Representations, and Discovery Learning


In the interests of time, I'm going to lump all of these approaches to curriculum design together. Imagine that you are designing a curriculum to help students learn how to use prime factorization to find greatest common factors and least common multiples. How would any of these approaches help you? Until you've broken these tasks down to sub-tasks that are intuitive or build on sub-tasks students have already mastered, any true form of discovery learning would be impossible and all you'd be doing is leaning on student engagement or multiple representations to help students memorize a procedure. These approaches can't help you until you've taken that fundamental step first.

In many ways, focusing on student engagement first feels like you are putting all of the onus on the student. As the curriculum designer, I'm not going to make the effort to break this down in a way that makes sense to you; instead, I'm going to gamify it or put it in a context relevant to you in the hopes that you will be able to overcome the dysfunctional curriculum and make meaning of this yourself. Focusing on multiple representations first feels like throwing stuff against a wall and hoping that something sticks. Do multiple representations, but only after you've done your job as a curriculum designer and found a path I can take to understanding. Discovery learning can be guided or unguided. If it is guided, learning should be guided by a curriculum that makes sense. If it is unguided, I think you are asking a lot from the student to re-create mathematical discoveries that it took the human race thousands of years to make.


The Math Edublogosphere


Finally, there is a group of bloggers that seems to advocate an entirely different approach to curriculum design, one that focuses on mathematical thinking. I say seems to because I've had difficulty engaging with this community. My sense is that they have settled on a new normal that is different from the normal most people are on and also different from the one I'm advocating. One of the most influential bloggers in the group is Dan Meyer. It also sounds like their thinking has been heavily influenced by the Interactive Mathematics Program (IMP), a problem-based math curriculum for high schools. I became aware of IMP when I was student teaching at Brookline High School in the 1990s, but I was not impressed by it.

I may try to blog more of my thoughts on the math edublogosphere at some later point, but I thought it was worth giving you a small taste of what they do. Here are two classic problems that I'm linking to through Bryan Meyer's Doing Mathematics blog.


In this problem, students are asked to investigate the number of squares a single square can be divided into. The squares don't all have to be the same size. Through their investigations, they identify patterns and then develop rules.


In this second problem, students investigate a number of questions, such as: Will the dot always hit a corner? How many sides will the dot hit before hitting a corner? Students are able to generate and investigate their own questions. Eventually, patterns are identified and rules developed.

While I appreciate the kinds of thinking that students are asked to do in these problems, I don't sense any kind of progression (new problems don't build on top of earlier problems and problems don't appear to get any richer or more complex). There also seems to be a separation between these problems and the curriculum these teachers use to meet the standards. While I'm not particularly interested in climbing the mountain this community is climbing, I am curious how they got there. Like I said, reaching a new normal is very hard.


Monday, December 23, 2013

The Mountain and the Compass

I've spent the last three days outlining a process for curriculum design:

  1. Design tasks (performance assessments) that reflect everything that a student should be able to do once he or she has met a standard. Don't hold back.
  2. Break those tasks down into sub-tasks that are either intuitive or build on sub-tasks that students have already mastered. Don't stop breaking down a task until you can honestly say that over 95% of the students should be able to do the sub-tasks.
  3. Design the curriculum with learning experiences and assessments targeting each sub-task. Don't stop refining your learning experiences until you can honestly say that over 95% of the students should be able to do the sub-tasks.
  4. Implement the curriculum and compare actual student performance against your expectations. Rinse and repeat.

Nothing about this process seems particularly earth-shattering, until you realize that we aren't doing any of it. We don't push the tasks we ask students to do; we calibrate them based on what we think they can handle. We don't break tasks down until they are intuitive and build on things that students have already mastered; we look the other way when there are tasks that we know make absolutely no sense to students. We don't base our expectations on detailed analyses of the tasks or the curriculum; we base our expectations on how poorly students have done in the past when saddled with a crappy curriculum. This process reflects a new normal for teaching and curriculum design, a new normal that is designed to create a new normal for student learning.

Creating new normals is hard. It's like jumping from one mountain to another. We are so focused on climbing the mountain in front of us that it is hard to turn back, crossing a deep valley before we can start climbing again on the other side. It is especially hard when you can't see the other mountain, the valley is riddled with impassable terrain and shrouded in fog, and the journey takes many years.

To begin such a journey, you need two things: a distant landmark and a compass. Okay, you probably need more than that, but if you only get two things, those are the two I'd ask for. You may not be able to see the mountain you're heading for, but a distant landmark on the same line will give you a bearing you can set your compass to. You'll need the compass because once you descend into fog, your distant landmark won't be visible anymore. You'll have to rely on your compass to make sure you aren't walking in circles or going in the wrong direction. Fortunately, you'll break out of the fog periodically and be able to check your progress and re-calibrate your bearing against the distant landmark, but it could be months between landmark sightings.

In the curriculum design process, pushing the limits of your tasks and breaking those tasks down into sub-tasks that are intuitive or build on things that students have already mastered are the distant landmarks. You won't arrive at the mountain just by doing those two things, but they are way stations in the journey and things that you can see from your current vantage point. Your expectations of student performance are your compass. Low is cold and high is warm. Check your compass constantly and re-calibrate it whenever you temporarily break out of the fog and get the chance to evaluate actual student performance.

So what does the mountain, this new normal, look like? Good question. I'll talk a little bit more about that in future posts, but for now, I've decided to distill the curriculum design process down to these basic elements. For those who know me well, you'll notice that I haven't said anything about immersion or leverage yet. That will come later. These steps are the essential steps and the first steps in your journey. They are essential in the sense that I don't think that you can design a truly effective curriculum without them. I also feel like, if you take on this curriculum design process and apply it with integrity, you will drive yourself to everything you need to know. You'll arrive at those other destinations, and eventually the mountain, once you've designed your tasks and broken those tasks down to the appropriate sub-tasks, and then realize that you still aren't getting every student to mastery. You won't be able to stop until you get there. I know that I couldn't.

Sunday, December 22, 2013

Pushing Tasks in Curriculum Design

Okay, let's review the steps that we have in our curriculum design process so far:

  1. Identify the standards you are targeting. This is pretty much the starting point if you are designing any kind of math or science curriculum for a public school district today.
  2. Identify the tasks that a student should be able to do when he or she meets the standard. These are your performance assessments.
  3. Break down the tasks into sub-tasks. Each sub-task should either be intuitive or build on a sub-task that students have already mastered.
  4. Evaluate the list of sub-tasks to determine how many students should be able to perform the task. You should expect at least 95% of the students to be able to do these sub-tasks. If that is not the case, then return to step 3 and break down the task some more.
  5. Design learning experiences in the curriculum so that each sub-task can be mastered. Provide formative assessments and interventions.
  6. Evaluate the learning experiences in the curriculum to determine how many students should be able to perform the sub-tasks once they have gone through the curriculum. You should expect at least 95% of the students to be able to do the sub-tasks. If that is not the case, then return to step 5 and improve the learning experiences.
  7. Implement the curriculum.
  8. Compare actual student performance to your expectations, including retention and transfer. If student performance does not match expectations, then re-evaluate your assumptions and return to step 2.

In this post, we are going to talk a little bit more about step 2. When designing tasks and performance assessments, you want to include everything that a student should be able to do when they have met a standard. This means pushing tasks as far as you can go.

If a student has learned the rules for order of operations (e.g., PEMDAS) and can apply them, then he or she should be able to evaluate expressions like this one:

5 + 4(27 − 3(9 − 7)^3)^2×(2(8 − 120÷(4 + 10(3^3 − 5^2))) − 1) − (3 + 4)^2

During an open house one year, a parent walked up to me and commented that he had never learned the rules for order of operations needed to evaluate expressions like these and that he was super impressed that his daughter was learning them in 6th-grade. This made me smile because there are no additional rules needed to evaluate expressions like these; if you know PEMDAS, then you can evaluate them. In fact, I'd go so far as to say that if you can't evaluate expressions like these, then you haven't met the standard.

According to Richard Elmore, a researcher at the Harvard Graduate School of Education, student learning is bounded by the tasks we ask students to do. If we ask them to evaluate expressions with three operations or less, then that is what students are going to learn how to do, and no more. They won't learn to do more until we ask them to do more.

For expressions with three operations or less, the set of possible permutations is very small. This means that most students use pattern recognition to evaluate these expressions instead of PEMDAS. If they see a single parenthesis in an expression, they know to evaluate it first. If they see an addition operation next to a multiplication operation, they know to evaluate the multiplication first. These students may know PEMDAS well enough to recite it back to you and even explain how it works, but they aren't using it to evaluate these simple expressions. They don't have to.

But give them an expression with over a dozen operations in it and their pattern recognition suddenly breaks down. Now they need to read the entire expression and run through PEMDAS one step at a time. And then they need to do it all over again after evaluating each operation. Over time, they will identify more sophisticated patterns to increase their speed and efficiency, bypassing PEMDAS once again (e.g., going straight for the "inside" parentheses first), but they will have mastered the rules for order of operations by then and they will know what to do when they encounter something they haven't seen before. Over 95% of my 6th-graders are able to evaluate expressions like these less than two lessons after being introduced to PEMDAS for the first time.
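One way to sanity-check an expression like the one above is to translate it into a language that follows the same order-of-operations conventions. Here's a minimal sketch in Python (the `^` and `÷` symbols become `**` and `/`, and the implied multiplications become explicit `*`):

```python
# The twelve-operation expression from above, translated into Python operators.
# Python applies the same precedence rules as PEMDAS, so it makes a handy check.
expr = 5 + 4*(27 - 3*(9 - 7)**3)**2 * (2*(8 - 120/(4 + 10*(3**3 - 5**2))) - 1) - (3 + 4)**2

# Working the innermost parentheses outward, as PEMDAS dictates:
# (9 - 7) = 2, 2**3 = 8, 27 - 24 = 3, 3**2 = 9, 4*9 = 36
# (3**3 - 5**2) = 2, 4 + 20 = 24, 120/24 = 5, 8 - 5 = 3, 2*3 - 1 = 5
# (3 + 4)**2 = 49, so the whole thing is 5 + 36*5 - 49 = 136
print(expr)  # prints 136.0
```

A student who truly has PEMDAS, rather than pattern recognition, can walk through the same innermost-first trace by hand.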

6th-grade textbooks avoid asking students to evaluate expressions with more than three operations because the textbook authors don't expect students to really meet this standard. They view this approach as a bridge to eventual mastery. Maybe they mistake pattern recognition for application. The writers of standardized tests make the same mistake and don't assess students against the actual standard. Don't do that. Come up with the most challenging tasks that you can that fit within the standard(s). Just make sure that you are doing your task analysis. When you have nested parentheses, one of the sub-tasks that students will need to be able to perform is to match up opening and closing parentheses. Push your tasks and do your task analyses, and you'll be surprised what students are able to do.

Saturday, December 21, 2013

Expectations for Student Learning

Now that you've performed your task analysis, it is time to set some expectations for how students will do on those tasks. If you've (1) broken the task down into sub-tasks that are intuitive or build on sub-tasks that students have already mastered, (2) provided sufficient opportunities for students to learn and master those sub-tasks, (3) assessed students on their progress toward mastery of each sub-task, and (4) provided interventions when necessary, then you should expect students to do very well.

The reason why setting expectations is important is because it closes a few feedback loops. First, it lets you know if you have taken your task analysis to the appropriate level. If you review the sub-tasks that students need to complete a task and you feel that many students won't be able to master those sub-tasks, then you know that you need to drill down further or redesign the curriculum to bridge some of those gaps. Then, if you implement the curriculum and student performance doesn't match your expectations, you need to go back to re-examine your assumptions or take a closer look at how the curriculum is being delivered.

If you look at how we teach middle school students to use prime factorization to find greatest common factors and least common multiples, it is fairly obvious that neither the curriculum developers who wrote the curriculum nor the teachers who are delivering the curriculum expect students to do very well on these tasks. Students are memorizing sets of procedures that make no sense to them. No one is surprised when only half of the students perform adequately on these tasks for a chapter test or that less than 10% of the students can even recall the procedures a month later. Instead, we choose not to think about it until after it happens.

The same thing happens when we teach area. Typically, 10-20% of students entering 6th-grade are unable to find the area of a rectangle without some review. We consider that normal and wave it away with the explanation that students forget things over the summer or that, especially in the case of students with a learning disability, some students aren't yet developmentally ready to master a skill or concept. These judgments are based on our experiences with thousands of students over decades, but historical data is not what our expectations should be based on. If we expect past results and consider them normal, then how will we ever make the leap to better results?

If our area curriculum is aligned with the Common Core standards, we begin laying the foundation for finding the area of rectangles in 2nd-grade:

Partition a rectangle into rows and columns of same-size squares and count to find the total number of them.

In 3rd-grade, students are formalizing their understanding:

A plane figure which can be covered without gaps or overlaps by n unit squares is said to have an area of n square units.

Measure areas by counting unit squares (square cm, square m, square in, square ft, and improvised units).

Find the area of a rectangle with whole-number side lengths by tiling it, and show that the area is the same as would be found by multiplying the side lengths.

And by 4th-grade, the focus is on asking students to apply what they already know:

Apply the area and perimeter formulas for rectangles in real world and mathematical problems.

So when asked to find the area of a rectangle at the start of 6th-grade, a student should be able to complete the task using these sub-tasks:

  1. Recognize that the area of the rectangle can be found by tiling the rectangle with unit squares
  2. Recognize that the unit squares are arranged in an array and can be counted using multiplication
  3. Recall the area formula for a rectangle
  4. Multiply

The key step in this process is recognizing that the area of the rectangle can be found by tiling the rectangle with unit squares. This recognition is grounded in the concept of area. If the student makes that connection, it should trigger a mental image of a rectangle tiled by unit squares, which should lead into step 2. By now, students should be very familiar with arrays and area models for multiplication, so this should trigger the area formula for a rectangle. Notice that finding the area of the rectangle does not require the student to start by recalling the area formula for a rectangle.

If a student is unable to perform this task, we should break down the task and ask ourselves if the student is capable of performing these sub-tasks. If the student is capable of performing the sub-tasks, then he or she is capable of performing the task, and it is our job to find out why that isn't happening and what we can do to address it. Forgetting how to find the area of a rectangle over the summer isn't normal and it isn't the same as forgetting the area formula for a rectangle. It is feedback that we should be responding to.

Friday, December 20, 2013

Task Analysis for Curriculum Design

Two of the first things I do when designing a curriculum are to define the tasks that I want students to be able to do (backwards design) and then perform a task analysis on those tasks. A task analysis basically identifies the steps or sub-tasks needed to perform the task.

As a curriculum designer, I need to know the sub-tasks required to complete a task because I must build opportunities where students can explicitly learn how to perform each of those sub-tasks. I also need to make sure that the instructor is assessing those sub-tasks and knows when and how to intervene.

For example, I am developing an area curriculum, and one of the tasks that I want students to perform is to take a polygon, decompose it into rectangles and triangles, find the area of those rectangles and triangles, and then combine those areas to get the area of the entire polygon. One sub-task that students need to perform is to find the length of a side when the length isn't given. Another is to identify the base and height of a triangle. Neither sub-task is trivial, and if a student is unable to do either of them, he or she will be unable to complete the overall task.

When doing a task analysis, I try to break the task down into sub-tasks that are either intuitive or build on sub-tasks students have already mastered. This highlights what students really need to know and be able to do, lowers the entry points into the curriculum (making it more accessible), and enables students to approach tasks as problem solvers rather than procedure followers.

It is very common for curriculum designers to break down a task to the wrong level. When I was working in schools, middle school students were expected to learn how to use prime factorization to find the greatest common factor (GCF) and least common multiple (LCM) of two numbers. (I find it very curious that prime factorization does not show up at all in the Common Core standards, although Massachusetts has inserted its own standard for it into grade 6.)

To find the GCF or LCM of 504 and 675, first you would find the prime factorization of each number:

504 = (2^3) × (3^2) × 7
675 = (3^3) × (5^2)

[I apologize for the spreadsheet notation. Unfortunately, Blogger restricts my use of css, so I can't format the exponents the way I'd like.]

Then you'd compare the exponents for each factor.

To find the GCF of 504 and 675, you want to grab the smallest exponent. 504 has three 2's, but 675 has zero, so we go with zero 2's. 675 has three 3's, but 504 only has two, so we go with two 3's. 675 has two 5's, but 504 has zero, so we go with zero 5's. 504 has one 7, but 675 has zero, so we go with zero 7's. Combining zero 2's with two 3's, zero 5's, and zero 7's gives us a GCF of 9.

GCF = 3^2 = 9

To find the LCM of 504 and 675, you want to grab the greatest exponent, which would be three 2's, three 3's, two 5's, and one 7, or an LCM of 37,800.

LCM = (2^3) × (3^3) × (5^2) × 7 = 37,800
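The smallest-exponent/greatest-exponent rule translates directly into code. Here's a sketch in Python, with a naive trial-division factorizer standing in for the factor trees students would use by hand (function names are mine, not from any particular curriculum):

```python
from collections import Counter

def prime_factorization(n):
    """Return the prime factorization of n as a Counter {prime: exponent}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:          # whatever remains is itself prime
        factors[n] += 1
    return factors

def gcf(a, b):
    """GCF: for each shared prime, grab the smallest exponent."""
    fa, fb = prime_factorization(a), prime_factorization(b)
    result = 1
    for p in fa.keys() & fb.keys():
        result *= p ** min(fa[p], fb[p])
    return result

def lcm(a, b):
    """LCM: for each prime in either number, grab the greatest exponent."""
    fa, fb = prime_factorization(a), prime_factorization(b)
    result = 1
    for p in fa.keys() | fb.keys():
        result *= p ** max(fa[p], fb[p])
    return result

print(gcf(504, 675))  # 9
print(lcm(504, 675))  # 37800
```

The `min`/`max` on exponents is exactly the "grab the smallest/greatest exponent" comparison described above.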

Teachers have a number of strategies for helping their students perform these tasks. One is to break out the number of factors in each prime factorization into a table:

          Factors
        2   3   5   7
504     3   2   0   1
675     0   3   2   0

Another is to teach students how to find the GCF in grade 7 and then the LCM in grade 8 so that they don't confuse the two procedures.

Neither strategy does anything to make the tasks more intuitive or accessible. But by drilling down a little further, we can break these tasks down to sub-tasks that do make sense.

One of the key sub-tasks is using prime factorizations to recognize when one number is a factor of another. If the prime factorization of 675 is (3^3) × (5^2), can you tell if 3 is a factor of 675? How about 5? 15? 45? 35? 45 is a factor of 675 because the prime factorization of 45 is (3^2) × 5, which is a subset of (3^3) × (5^2). 35 is not a factor of 675 because the prime factorization of 35 is 5 × 7, which is not a subset of (3^3) × (5^2).

Once you can use prime factorizations to recognize when one number is a factor of another, you can easily use them to recognize common factors. 3 is a common factor of both 504 and 675. 6 isn't because 2 × 3 is a subset of (2^3) × (3^2) × 7, but not a subset of (3^3) × (5^2). To find the GCF, keep adding factors to a common factor until you can't add anymore.

To find the LCM, you are going to start by finding a common multiple. By definition, a common multiple will have both 504 and 675 as factors, which means that the prime factorization of a common multiple must contain (2^3) × (3^2) × 7 and (3^3) × (5^2) as subsets.

Does (2^10) × (3^10) × (5^10) × (7^10) contain (2^3) × (3^2) × 7 and (3^3) × (5^2) as subsets? Yes, so it is a common multiple of 504 and 675. To find the LCM, we begin by knocking out factors one by one and using our factor test to see if we still have a common multiple. Eventually, when we can't knock out any more factors, we have the LCM, or least common multiple. This process moves rapidly from random trial-and-error to informed trial-and-error to prediction to generalization and a procedure for finding the LCM.
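The subset test and the knock-out process can both be sketched in Python. In this illustrative sketch (the helper names are mine), `contains` compares exponents prime by prime, and `lcm_by_knockout` starts from a deliberately oversized common multiple, like the (2^10) × (3^10) × (5^10) × (7^10) example above, then removes one prime factor at a time while the subset test still passes:

```python
def factorize(n):
    """Prime factorization of n as a dict {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def contains(big, small):
    """True if factorization `small` is a subset of factorization `big`."""
    return all(big.get(p, 0) >= e for p, e in small.items())

def lcm_by_knockout(a, b):
    fa, fb = factorize(a), factorize(b)
    # Start from a common multiple that clearly contains both factorizations.
    m = {p: 10 for p in set(fa) | set(fb)}
    # Knock out factors one at a time while we still have a common multiple.
    knocked_one_out = True
    while knocked_one_out:
        knocked_one_out = False
        for p in m:
            m[p] -= 1
            if contains(m, fa) and contains(m, fb):
                knocked_one_out = True
            else:
                m[p] += 1  # went too far; put the factor back
    result = 1
    for p, e in m.items():
        result *= p ** e
    return result

print(contains(factorize(675), factorize(45)))  # True: 45 is a factor of 675
print(lcm_by_knockout(504, 675))                # 37800
```

The loop halts exactly when no more factors can be knocked out, which is the moment the exponents have been driven down to the greatest-exponent rule from before.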

School Districts in a Business World

I attended a LearnLaunch K-12 Peer Learning Group meeting today where Teddy Rice, Co-Founder and President of Ellevation, spoke about lessons learned selling into school districts. Teddy had a lot of great things to say, but one point kind of stuck in my craw. He said that one of the key challenges is that school districts don't really compete against each other. To expound on this point, he explained that if JPMorgan bought a software program that saved them 1¢ for every $1,000,000 in transactions, you could be sure that Goldman Sachs would go out and buy the same software program tomorrow, whereas you don't see that same behavior amongst school districts.

I wasn't bothered so much by Teddy's statement that school districts don't really compete against each other, but by his conclusion that the lack of competition is what leads to slow or nonexistent uptake of successful innovations or proven best practices from neighboring districts. If that were the case, then you'd expect to see school districts stealing ideas from competing charter schools in their area, which is not happening. School districts compete head-to-head with charter schools and they do things to try to fend those charter schools off, but they don't steal ideas.

What struck me about Teddy's example is that, if a second software program showed up that also helped the bottom line, then JPMorgan and Goldman Sachs would be able to go out and buy the second piece of software without giving up the first. That isn't the case for a school district. For school districts, buying something new typically means giving something up. So unless a new innovation does the same job as an existing tool and can replace it at the same cost, adopting the new innovation will mean giving up an earlier innovation somewhere else in the system. It's a zero sum game and the pie doesn't grow.

JPMorgan and Goldman Sachs are able to keep adding tools to their arsenals, igniting a competitive arms race where steady incremental improvements compound over time, because their revenues and profits are growing explosively. Adopting an innovation that improves your bottom line gives you more resources to invest in the next innovation. If their revenues and profits were only growing at a constant percent each year, they would be considered stagnant and unsuccessful businesses. This is why Microsoft's stock price, despite new highs in revenue and profits each quarter, hasn't budged in over a decade.

This dynamic does not exist for public schools. An innovation that leads to an incremental improvement in student outcomes does not increase the resources you have to invest in the next innovation. This reduces the incentive for investing in innovation, but it also blocks you from making steady incremental improvements even if you are trying to drive forward. This is one of the external hindering factors that I will be analyzing for my strategic plan.

Thursday, December 19, 2013

Strategic Planning


Over the next few weeks, I will be laying out components of my strategic plan for creating new normals for teaching and learning. These blog posts will be a bit scattered, but I will attempt to loosely follow the strategic planning process I learned while pursuing my school administrator's license at Boston University.

In step one, I will describe the goal of the strategic plan and the problem I am trying to solve. This will include both a historical perspective and a description of the current situation. (I suspect that the goal of the plan will shift as I perform the analysis in step two and begin outlining the actual plan in step three.)

In step two, I will analyze the internal and external factors that may hinder or facilitate implementation of the plan. (I haven't quite thought this through, but I'm assuming that the internal factors involve me and my company since we are implementing the plan.)

In step three, I will begin outlining strategies, action steps, and benchmarks of success in a timetable. (Somewhere in step three, I will probably want to go back and revise the work I did in steps one and two. For me, the strategic planning process is cyclical.)

My plan is to focus on my strategic planning between now and the end of the year, but that timing is fluid. I really don't want to begin implementing the plan until the plan has reached a certain level of clarity and cohesiveness. Carving out the next few weeks for strategic planning is just my way of clearing the decks and letting my brain know that it has the time and space to do some deeper thinking.

Midwifery

I have a tendency to hold onto ideas and let them gestate until they are ready to hatch fully formed. Unfortunately, the gestation period can be long and holding a bunch of ideas in my head can be kind of difficult. I started this blog with the intention of trying a new approach. I would stake out new ideas as they came to me and then organize them later.

Things haven't quite worked out that way. I haven't written a blog post in months and my brain is crammed with so many ideas at the moment that I can't even tell what I've got in there. What's put my brain into overdrive? Just the kernel of a strategic plan for my mission in life, a kernel that has been gestating for over twenty years and is almost ready to pop.

Thankfully, every day is a new day and I'm back to try again. I recognize that establishing new habits is really hard and it takes work. With that in mind, I plan on posting every day. Sometimes even multiple times a day. This is my job between now and the new year. Most of these posts are going to be somewhat disjointed, but I need to get them out of my head and on paper before I can begin organizing and fully making sense of them. I'm inducing labor and it's going to be messy, so buckle up! :)

Sunday, September 29, 2013

In Gratitude We Trust

A few months ago, my coach Sarah asked me to start a gratitude journal. I was pretty skeptical about the whole thing. I'm not typically a rah-rah person and I really couldn't see how writing down things that made me grateful would help me in the least. But everything else that Sarah had suggested had worked out pretty well, so I decided to give it a shot.

At first, I treated it like an exercise in recordkeeping. Every evening, I would sit down and rack my brains for moments throughout the day that made me feel grateful. It wasn't that hard to do. I set a quota of three items a day, and I was often able to hit four or five. One thing that frustrated me was forgetting to include key moments; the other was repeating the same items. It felt a little like cheating if something made me feel grateful on both Monday and Tuesday.

After about five weeks of dutifully filling in my gratitude journal, I decided to take a step back and assess how the journal was working for me. Even though it felt like I had low expectations going into it, I must have expected something because I was disappointed not to feel some positive, internal changes at work. I knew that Sarah wouldn't have suggested the gratitude journal if there wasn't more to it than this.

When I brought all of this up with Sarah, we put our heads together and thought about how the journal might work better for me. Sarah suggested that, instead of using the journal for recordkeeping, I use it for dipsticking, recording what I was grateful for in the moment. She also said that repeating myself was normal, and that she had gone through a long period where her gratitude journal entries were almost identical day after day. After all, it makes sense that there would be things in our daily lives for which we would be constantly grateful.

Some people will argue that keeping a gratitude journal and imposing a quota on it forces you to make things up or to be artificially positive. I would argue that we are genuinely grateful all the time, but our gratefulness can get covered up by immediate circumstances and negativity. We are not defined by the outermost layer of our consciousness. Digging down to our gratefulness uncovers something real if we are real with ourselves. That is the power of the gratitude journal.

Dipsticking and uncovering the things that made me grateful gave me a nice boost of positive energy each and every day. It reminded me of the good things in my life and gave me a chance to break out of a rut if I was in one. How long that positive energy lasted, I couldn't be sure, but it was nice never going through a whole day completely down in the dumps.

The breakthrough happened early this week. I was feeling particularly down because I had been battling a cold for three weeks and suffering a severe case of writer's block at work. On Sunday, I decided that I would use my gratitude journal to dig my way out of negativity. But when I sat down to write on Monday, I had absolutely nothing. I couldn't feel one ounce of gratitude. And that threw me into a rage... at myself. I had been feeling pretty awesome about myself just two weeks earlier, and here I was with nothing to be grateful for! I started pounding things out in anger on the keyboard, and here is what I wrote for item 3:

I'M STILL HERE AND STILL KICKING! NOTHING IS GOING TO GET ME DOWN!

And for item 7:

Hmmm… I'm up to seven items in this gratitude entry :) Feeling a little better now!

That immediately snapped me out of my headspace, and since then, I've been a lot looser and more conversational with my gratitude journal. One positive entry triggers more. My writer's block ended (coincidence?) and my cold has almost cleared up. Planning on getting on my treadmill for the first time in weeks tomorrow. Today, I pulled out my kitchen shelves and cleaned behind them. If you know me, that is huge.

My gratitude journal has changed my life. Thank you, Sarah.

Sunday, September 22, 2013

Mise en Place

There is a dish that I like to prepare that consists of green beans and chicken cooked in a marinated bean curd sauce. The marinated bean curd comes in a jar. To make the sauce, you fish a couple of cubes of bean curd out of the jar and mash them up. This forms the base for a nice, creamy sauce. It couldn't be easier... or tastier. When prepared properly, the green beans, chicken, and sauce meld together perfectly. I also like to add crushed garlic sliced lengthwise, not minced. The garlic isn't browned in oil, but cooked slowly in the sauce so that it becomes soft and mild. Getting a soft slice of garlic is a real treat.

This dish is very warm and homey, and it can be prepared with minimal prep in one sauté pan. However, for some reason, the dish hadn't been turning out quite right for me lately. While all of the individual components seemed fine, they weren't melding. And nothing that I tried was able to fix the problem.

Two weeks ago, I decided to make the dish again, but this time, I would do my mise en place properly. Mise en place is a French phrase which means "putting in place." Chefs use the term to refer to the preparation they do before cooking. As Anthony Bourdain likes to say, "Prior preparation prevents piss-poor performance."

I trimmed the ends off the green beans, crushed and sliced my garlic, sliced my chicken breast, and then mixed the sliced chicken breast with two cubes of the marinated bean curd, making sure that the bean curd was completely mashed up and thoroughly coating the chicken. I did all of this before putting my pan on the stove and heating it. Once the pan was hot, I added a little oil and sautéed the chicken until the slices of chicken breast were cooked through. At this point, the marinated bean curd had formed a thick sauce with the oil and juices from the chicken on the bottom of the pan.

I pulled the slices of chicken breast out of the pan and then added the crushed, sliced garlic. I turned down the heat so that the garlic wouldn't brown and let it cook for about 30 seconds. When the garlic was just starting to cook through and get soft, I added the trimmed green beans (cut into 2-inch long pieces) and stir fried them until they were also coated in the marinated bean curd sauce. I then added water, covered the pan, and brought everything to a boil. The amount of water that you add is crucial. You want to add just enough water so that when the green beans are done cooking, almost all of the water has boiled off and you are left with the right amount of sauce. The cooked chicken gets mixed back into the green beans and the sauce at the very end, and the whole thing is served over rice.

The dish turned out perfectly, and I suddenly realized that not doing my mise en place had been screwing me up. The key to the dish is mashing up the marinated bean curd and sautéing it in oil for a little while, and then making sure that it coats each component. This brings out the flavor of the bean curd and integrates the dish. Because I had been prepping my ingredients as I was cooking instead of doing my mise en place ahead of time, I had been doing everything out of order. I would slice and sauté my chicken breast, and only start trimming the green beans once the chicken was in the pan. The chicken would finish cooking before all of the green beans were trimmed, so I would buy myself some time by pulling the chicken, adding water to the pan, and bringing the water to a boil. Once the water was boiling, I would add the trimmed green beans and stir in the cubes of marinated bean curd.

Failure to do a proper mise en place reflected a lack of caring on my part. I may have been putting food into my stomach, but I was not taking any pride or pleasure in it. I could argue that prepping while cooking is more efficient, but that doesn't really stand up when ten minutes of prep is the difference between an okay dish and a really good dish, or between a rushed and distracted cook and a calm and focused cook. If I really want to multitask while cooking, I can always tidy up my station or wash a few dishes.

In my new way of being, I'm trying to be intentional and in the moment. If I am going to do something, then I'm going to do it and not do fifty million other things at the same time. Part of this is separating work from play, which means being able to turn work mode on and off at will. That has always been difficult for me because so much of my work requires substantial pre-thinking. If I am going to focus exclusively on work in my work time, then I need to come to work prepared to be productive. It also means figuring out ways to trigger my pre-thinking so that I'm not waiting helplessly for it to happen serendipitously.

I have learned a few things about my mental mise en place this month while experimenting with my schedule:

  • Knowing that I have a work period coming up at 4pm causes me to do more focused pre-thinking earlier in the day and even the previous evening. My brain knows that it is going to have to be productive and it wants to be prepared. Forcing my brain to wait until 4pm instead of beginning work immediately builds anticipation.
  • My brain will do pre-thinking for work during my play time, but only when my brain is free to wander. If I am mindlessly doing some dishes, my brain will pre-think, but it'll stop and snap back into the moment if a colorful butterfly flits by the window. It's a bit like solving problems in your sleep.
  • My brain thinks better, including pre-thinking, when I am more active during my play time. Being out-and-about and active during my play time is more energizing in general.
  • The kinds of problems that my brain works on during these unstructured pre-thinking sessions seem to be more practical, or technical, in nature. I'm not making large conceptual leaps. I'm doing the smaller, more focused tasks that I could have been doing whenever I got stuck but tended to put off instead. Hopefully, this is clearing the underbrush for those large conceptual leaps to happen in the near future.

I am still struggling with tasks where progress is iterative, and pre-thinking and doing are tightly coupled. For those kinds of tasks, I feel that the key will be identifying my optimal conditions for pre-thinking and then intentionally carving out time and space for it.

Wednesday, September 18, 2013

Anandtech Benchmarks Apple's 64-Bit A7 SoC

Anand Lal Shimpi over at Anandtech has just weighed in on Apple's 64-bit A7 SoC. Anand is widely acknowledged as the leading expert on CPUs and GPUs among tech writers and reviewers on the web today. You should really read the entire article, but I'm going to focus on his analysis of the A7 and its CPU (code named Cyclone).

Anand believes that the CPU consists of two 64-bit ARMv8 cores with a max frequency of 1.3 GHz. Integer performance for 32-bit code has increased 42% from last year's A6 to this year's A7. Running 64-bit code on the A7 increases integer performance by another 38% (the A7 can run code in both 32-bit and 64-bit mode). Compounded, this yields an overall increase of 96%. For floating point performance, the corresponding increases are 67%, 36%, and 128%.

Geekbench 3.0.0 Compute Performance

                              Integer (ST)  Integer (MT)  Floating Point (ST)  Floating Point (MT)
Apple A6 1.3 GHz (32-bit)          750          1472              588                 1165
Apple A7 1.3 GHz (32-bit)         1065          2095              983                 1955
Apple A7 1.3 GHz (64-bit)         1471          2872             1339                 2659

Some of the increases in integer performance can be attributed to new cryptographic instructions for hardware acceleration in the ARMv8 instruction set, but it is fairly clear that Apple has, once again, taken the lead in both CPU and GPU performance. It briefly held the lead last year when it released the A6, but it was quickly overtaken in CPU performance by incremental improvements to Qualcomm's Snapdragon SoC.

I expect Apple to hold onto the CPU performance crown a little longer this year. Qualcomm is still trying to squeeze more performance out of its Snapdragon SoC, but the LG G2 just came out a week ago with Qualcomm's latest and greatest, and it doesn't come close to the iPhone 5s. Meanwhile, Samsung and Nvidia are both using ARM's Cortex-A15 (although Samsung has recently taken an architecture license), which was really designed by ARM for the server market and not the mobile market. It runs extremely hot. Apple absolutely destroys the competition in terms of performance per watt and IPC (instructions per cycle), and its competitors won't be able to catch up by increasing core counts and max frequencies any more. They are all hitting thermal ceilings.

This means that we won't see a credible competitor for Cyclone and the A7 until someone else releases an ARMv8 core. Samsung bragged that they'll have one in 2014. I'm sure that Qualcomm has something in the works, but they haven't announced anything yet. Like the Cortex-A15, ARM designed the 64-bit ARMv8 instruction set with the server market in mind, and ARMv8 cores for the server market aren't even predicted until 2014.

And this is ignoring the software side completely. Anand confirms that all of iOS and all of Apple's iOS apps are already 64-bit. At boot up, there isn't a single 32-bit process running on the iPhone 5s. That is astonishing. The transition from 32-bit to 64-bit took a lot longer on OS X. In fact, there are a number of key Apple applications on OS X that are still 32-bit.

Saturday, September 14, 2013

Apple's 64-Bit Gimmick

On Tuesday, Apple unveiled its latest iPhones. Powering the iPhone 5s (boy, I really wish they hadn't decided to use a lowercase 's' here) is a new system on a chip, the A7, that Apple is touting as the first 64-bit processor in a smartphone. Predictably, nerd rage ensued.

Zeroing in on the fact that the iPhone 5s is unlikely to ship with more than 2 GB of RAM, internet commentators instantly attacked the 64-bitness of the A7 as a marketing gimmick, something for Apple to brag about even when its users or developers wouldn't actually see any benefits. Drawing this conclusion was somewhat understandable since the main reason cited for switching from 32-bit to 64-bit processors on the desktop was always the ability to address more than 4 GB of RAM. (Of course, the reasonable thing to do would have been to engage in a bit of inquiry and to ask yourself why Apple might have made this decision instead of ridiculing a decision you don't have the technical knowledge to understand — but no one has ever accused internet commentators of being reasonable.)

A number of experts stepped up in an effort to counter this misconception, explaining that the transition to 64-bitness also meant a transition to larger registers and a more efficient instruction set, the ARMv8. A second round of articles was published, but the internet commentators wouldn't hear it. Some people are simply so invested in the idea that Apple is not an engineering company that it is utterly inconceivable to them that Apple could have engineered anything first.

The funny thing is, first is completely irrelevant, and the A7 demonstrates exactly how much of an engineering company Apple actually is. Everyone could see that 64-bit processors were coming; they were on ARM's roadmap. Samsung could have done it first, but they bet on the 32-bit Cortex-A15 instead. Qualcomm could have done it first; they certainly have the design chops to roll their own custom core. But if either Samsung or Qualcomm had done it first, it would have been a gimmick, because neither controls the software stack needed to take advantage of 64-bit from day one. That's not a double standard.

People have argued that the experts are overstating the benefit of larger registers and the more efficient ARMv8 instruction set because programs would have to be recompiled to take advantage of these features. But while that might be difficult to do on Android, Apple has engaged in a multiyear effort to transition its toolchain from GCC to Clang/LLVM and they have also worked to get the vast majority of their developers on Cocoa frameworks for exactly this kind of flexibility. Apple didn't do these things because it was sexy or made good marketing. They learned from their transitions from Carbon to Cocoa, and from PowerPC to Intel. How much do you want to bet that iOS 7 is already running on ARMv8 and has been for some time? To me, that is good engineering.

When Apple designed the iPad 3 with a retina display, they doubled the size of the battery and tripled the performance of the GPU. Increasing the screen resolution from 1024 x 768 to 2048 x 1536 meant doubling the backlighting in order to maintain screen brightness. Since the backlights account for about 80% of the power consumed by a tablet, the battery had to be doubled to maintain battery life. The iPad 2 already had the best GPU of any tablet, but the iPad 3 would have to push four times as many pixels. Apple was unable to quadruple GPU performance in one year, but they got surprisingly close. This meant that some games would run at lower frame rates on the iPad 3 than on the iPad 2.
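The trade-off above is easy to quantify. Here is a rough sanity check (my own arithmetic sketch; the 3x GPU figure is from the paragraph above):

```python
# Moving from the iPad 2's display to the iPad 3's retina display.
ipad2_pixels = 1024 * 768    # 786,432 pixels
ipad3_pixels = 2048 * 1536   # 3,145,728 pixels

pixel_factor = ipad3_pixels / ipad2_pixels  # 4.0: four times as many pixels
gpu_factor = 3.0                            # roughly 3x raw GPU performance

# GPU performance available per pixel, relative to the iPad 2.
per_pixel = gpu_factor / pixel_factor       # 0.75

print(pixel_factor, per_pixel)
```

With only 75% of the iPad 2's per-pixel GPU headroom, it's no surprise that some games ran at lower frame rates on the iPad 3 despite its much faster GPU.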

In contrast, when Android tablets moved to high-resolution displays, there was little effort to maintain battery life or performance. In other words, there was little engineering. OEMs simply slapped new displays on existing designs. It'll be interesting to see how Android makes the transition from 32-bit to 64-bit processors. Samsung and Qualcomm don't control when Google will build Android on ARMv8. I'm not sure who controls the toolchain app developers use, or how readily or quickly app developers will recompile their apps even if the tools are there. Third-party app developers of all sizes on iOS are already preparing to update and recompile their apps for the new UI in iOS 7. Maybe Apple hasn't been sitting on its ass after all.

Monday, September 9, 2013

Standard-Essential Patents for Dummies

There has been a lot of deliberate obfuscation around standard-essential patents (SEPs) lately, mostly from people trying to defend or minimize bad behavior from Samsung and Google/Motorola. However, there is much less gray area than people realize. Samsung and Google/Motorola simply tried to abuse their SEPs, and broke antitrust laws and their commitments to standard-setting bodies in the process. It is actually quite black and white once you take the time to understand the issues.

Standards are usually established for interoperability. Recently, Blu-Ray and HD DVD slugged it out in the market, with Blu-Ray finally coming out on top. To avoid those kinds of bruising battles, especially when they might slow down the adoption of a technology, sometimes an industry will decide to set a standard and pick a winner itself ahead of time. This occurred in the video industry with the MPEG-4 standard and in the telecommunications industry with the 3G standard. When a standard is set, companies voluntarily contribute their technologies and patents to the standard.

Picking a winner this way would normally be illegal, since it involves the major players in an industry sitting down together and colluding. While any one player may only hold a slice of the market, collectively they hold a monopoly. Governments around the world have made an exception for standard-setting in their antitrust laws as long as the companies participating in the process agree to license any SEPs (patents required to implement the standard) on FRAND (fair, reasonable, and non-discriminatory) terms. This is to protect any small or new players who wish to implement the standard from getting locked out. (Some standard-setting bodies, such as W3C, go even further and require companies to turn over their patents so that implementing a standard doesn't require any licensing at all.) Again, participating in the standard-setting process and contributing your technology to a standard are all voluntary.

By agreeing to license your SEPs on FRAND terms, you are agreeing to license those patents to all comers and with no exceptions. Because of this, it is completely acceptable for Microsoft to release a product that implements a standard before beginning negotiations with Samsung for use of Samsung's SEPs. Microsoft will be paying Samsung; it's just a matter of when. And this payment will cover the period of time when Microsoft was using Samsung's technology without a license. It also means that Samsung should not be able to seek an injunction against Microsoft, blocking Microsoft from selling its product, before a license has been obtained. Injunctions are only allowed if Samsung cannot be made whole with money from Microsoft. But with SEPs, you can always be made whole with money. Samsung should only be able to obtain an injunction if FRAND terms are established (by courts if necessary), and then Microsoft still refuses to pay. In the current disputes, Apple and Microsoft have both signaled their willingness to pay by posting bonds with the courts.

Governments in the U.S., Europe, Japan, and Australia have all come down and decided that injunctions are inappropriate in SEP disputes. A U.S. court has also found that Google/Motorola acted in bad faith by never even attempting to offer FRAND terms to Microsoft before seeking an injunction (to recap: Google/Motorola asked for $4,000 million, Microsoft offered $1.2 million, and the court settled on $1.8 million per year). In response, Android fans are now pivoting and trying to say that Samsung's and Google/Motorola's bad behavior is somehow justified by Apple's and Microsoft's bad behavior.

To demonstrate how wrong-headed this is, I'm going to use a simple analogy. Imagine that a school decides to standardize on a certain type of notebook. To attend this school, a student is required to purchase and use these notebooks, which happen to be manufactured and sold by one of the parents (Mr. Google) at the school. No other notebooks are acceptable. In establishing the standard and awarding the notebook contract, the school stipulates that Mr. Google must sell the notebook to students on FRAND terms.

Another parent at the school, Mrs. Apple, makes and sells hand-crafted pencil cases. These pencil cases are highly coveted, but Mrs. Apple refuses to sell one to Mr. Google. In turn, Mr. Google says that Mrs. Apple can buy notebooks from him either for $4,000 apiece or in exchange for one of her pencil cases. The notebooks are worth approximately $2. Tying the licensing of SEPs to the cross-licensing of non-SEPs is not acceptable, even if Mrs. Apple is being a jerk.

Another parent at the school, Mrs. Microsoft, is suing Mr. Google because she slipped and fell on his unshoveled sidewalk following a snowstorm. Mr. Google says that Mrs. Microsoft will have to pay $4,000 for a notebook unless she agrees to drop the lawsuit first. Using SEPs to defend against lawsuits involving non-SEPs is not acceptable, even if Mrs. Microsoft is being a jerk.

When the court found that Google/Motorola was only entitled to $1.8 million per year instead of the $4,000 million it was asking for, many people said that companies would stop participating on standard-setting committees. This, again, shows a complete lack of understanding of the issues involved. One thing that U.S. courts are still grappling with is the basis for establishing licensing fees for SEPs. The opinion favored by Apple and Microsoft (both own considerable numbers of SEPs themselves and have pledged not to seek injunctions) is that SEPs should be valued based on their pre-standard value. Samsung and Google/Motorola both argue that SEPs should be valued based on their hold-up value, meaning that SEPs should be worth more because companies have to use them to implement a standard. Some of the SEPs that Google/Motorola was using against Microsoft involved interlaced video. Interlaced videos on the internet are now very rare, and I'm sure that Microsoft would have happily left support for interlaced video out of their products rather than pay Google/Motorola $4 billion per year, but Microsoft had no choice because handling interlaced video is in the MPEG-4 standard.

Returning to our school notebook analogy, imagine that Mr. Google's notebooks sold for $2 before the school standardized on them. Shouldn't $2 be the starting point for establishing FRAND terms? Of course, conditions change over time and Mr. Google may have to adjust the price if paper prices start going up, but $2 should clearly be the starting point. Do you really think that Mr. Google would have a problem selling his notebooks for $2 each to 500 students when he used to sell his notebooks for $2 each to 80 students? Should he really have the option to raise the price of his notebook to $5 once students are required to buy them? Those are the arguments that Samsung and Google/Motorola, and their fans, are making.