Late 2006 MacBook Pro, MacBook Air SuperDrive, VMWare Fusion and 32-bit Windows XP

The basic gist is that they don’t play well together out of the box. You can make them work if you’re willing to do a bit of fiddling and if you’re not afraid of the OS X command line.

The first problem is that Apple decided that the new MBA SuperDrive should only be used in Macs that didn't come with a factory-installed optical drive. You'll actually encounter two different issues: one with getting the MBP to recognize the drive at all, and another with using it to play a DVD.

The first issue, getting the drive recognized, can be solved by following the instructions at Use the Apple external SuperDrive on (almost) any Mac. In a nutshell, you need to edit /Library/Preferences/SystemConfiguration/com.apple.Boot.plist and set the empty <string> element to mbasd=1. The file should then look like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Kernel Flags</key>
<string>mbasd=1</string>
</dict>
</plist>

If your version of the file already has text between <string> and </string>, insert a space after the existing text, then insert mbasd=1.
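For example, if your file already contained a kernel flag (debug=0x144 here is purely hypothetical), the merged element would look like this:

<!-- debug=0x144 stands in for whatever flags were already present -->
<string>debug=0x144 mbasd=1</string>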

The second issue is using the drive to play a DVD. This Apple download addresses it:

http://support.apple.com/kb/DL830

Arduino Series: Working With An Optical Encoder

The Goal

I have an old White 1602 knitting machine that uses a light scanner to produce patterns in the knit fabric. The bed of the knitting machine syncs up with the controller via two obsolete rotary encoders, and the stitch patterns are produced as a sequence of pulses that causes specific needles to be selected.

The first problem is that the light scanner has a lot of mechanical parts that have deteriorated with age. Parts are no longer available.

The second problem is that the width of the pattern is constrained by the width of the mylar that feeds into the light scanner to produce the pattern.

The third problem is that while the light scanner does its job well when it’s functioning, all of its capabilities could be performed more efficiently and accurately by a computer.

My goal is to completely replace the light scanner with newer technology. This post illustrates a prototype for how I might use an optical encoder to track the position of the knitting carriage as well as when it changes direction.

Equipment

Arduino Mega 2560 R2
US Digital Optical Encoder S1-1250-I
4 male-to-female jumpers
Electrical tape

About The Encoder

While obsolete, the S1-1250-I encoder is a very capable piece of hardware, but it's much more expensive than what's available on today's market. I used it because I already had one, but the information presented in this post should work with any rotary quadrature encoder. I'll most likely replace the US Digital encoder with the SparkFun COM-11102 1024 P/R quadrature encoder I have on order.

About The Approach

There are basically two ways to interface with the encoder: polling and interrupts. A little project I'm playing with will require a considerable amount of accuracy, so I chose to use interrupts, as polling might result in missed pulses.
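To illustrate the difference, here's a minimal polling sketch under the same wiring assumptions used in the rest of this post (pins 20 and 17, described below). It only works as long as loop() runs fast enough to catch every transition:

// Polling version: sample channel A on every pass through loop().
long encoder0Position = 0;
int lastA = LOW;

void setup()
{
  pinMode(20, INPUT);   // channel A
  pinMode(17, INPUT);   // channel B
}

void loop()
{
  int a = digitalRead(20);
  if (a == HIGH && lastA == LOW)   // rising edge caught by polling
  {
    if (digitalRead(17) == HIGH)
      encoder0Position--;          // counter-clockwise
    else
      encoder0Position++;          // clockwise
  }
  lastA = a;
  // Anything else done in loop() delays the next sample
  // and can silently miss a pulse at higher shaft speeds.
}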

Wiring

The encoder has 3 outputs: channel A, channel B and index. We’re not going to use index, so we need to make 4 connections — one for each of the two channels, one for power and one for ground. The encoder has raw wires so we need to add pins in order to attach it to the Arduino.

  1. Make sure the Arduino is powered off.
  2. Strip 1/4″ – 3/8″ of insulation from the encoder’s leads for power, ground, channel A and channel B.
  3. Insert the end of each wire into the female end of a jumper and secure with electrical tape.
  4. Connect the power lead to the 5V power pin.
  5. Connect the ground lead to one of the Arduino’s ground pins.
  6. Connect the channel A lead to digital pin 20. This pin is one of the six pins on the Mega that support external interrupts. The other interrupt pins are 2, 3, 18, 19 and 21.
  7. Connect the channel B lead to digital pin 17.
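
The connections are summarized below (lead colors vary by encoder cable, so confirm against the S1 datasheet):

// Encoder lead  ->  Arduino Mega 2560
// +5V           ->  5V power pin
// Ground        ->  GND
// Channel A     ->  digital pin 20 (interrupt 3)
// Channel B     ->  digital pin 17
// Index         ->  not connected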

The Code

/****************************************************************************************

Author:    Brenda A Bell
Permalink: https://www.brendaabell.com/2014/02/arduino-series-working-with-an-optical-encoder/

****************************************************************************************/

#define ENCODER0PINA         20      // this pin needs to support interrupts
#define ENCODER0PINB         17      // no interrupt required
#define CPR                  1250    // encoder cycles per revolution
#define CLOCKWISE            1       // direction constant
#define COUNTER_CLOCKWISE    2       // direction constant

// variables modified by interrupt handler must be declared as volatile
volatile long encoder0Position = 0;
volatile long interruptsReceived = 0;

// track direction: 1 = clockwise; 2 = counter-clockwise (see constants above)
short currentDirection = CLOCKWISE;

// track last position so we know whether it's worth printing new output
long previousPosition = 0;

void setup()
{

  // inputs
  pinMode(ENCODER0PINA, INPUT);
  pinMode(ENCODER0PINB, INPUT);

  // interrupts
  attachInterrupt(3, onInterrupt, RISING);   // on the Mega, interrupt 3 is pin 20

  // enable diagnostic output
  Serial.begin(9600);
  Serial.println("\n\n\n");
  Serial.println("Ready.");
}

void loop()
{
  // only display position info if it has changed
  if (encoder0Position != previousPosition)
  {
    Serial.print(encoder0Position, DEC);
    Serial.print("\t");
    Serial.print(currentDirection == CLOCKWISE ? "clockwise" : "counter-clockwise");
    Serial.print("\t");
    Serial.println(interruptsReceived, DEC);
    previousPosition = encoder0Position;
  }
}

// interrupt function needs to do as little as possible
void onInterrupt()
{
  // read both inputs
  int a = digitalRead(ENCODER0PINA);
  int b = digitalRead(ENCODER0PINB);

  if (a == b)
  {
    // b is leading a (counter-clockwise)
    encoder0Position--;
    currentDirection = COUNTER_CLOCKWISE;
  }
  else
  {
    // a is leading b (clockwise)
    encoder0Position++;
    currentDirection = CLOCKWISE;
  }

  // wrap the count to 0 - 1249 (plain % can go negative after a decrement)
  encoder0Position = (encoder0Position % CPR + CPR) % CPR;

  // track the number of interrupts
  interruptsReceived++;
}

How It Works

Lines 8 – 12 define a few useful constants to make the code more readable. What they do should be obvious from the comments.

Lines 15 – 16 define global variables that will be modified by the interrupt handler.

Lines 19 and 22 define other global variables we'll use inside the Arduino loop.

The setup() function on line 24 configures our channel A and channel B pins for input, attaches an interrupt handler to channel A’s pin and configures the serial port so we can see some diagnostic output. Note that we’re going to interrupt on a rising state change so we know that the state of channel A will always be high when our interrupt is triggered. Using a rising or falling interrupt means:

  • We always know the state of A without having to perform a read: A is always high in a rising interrupt and always low in a falling interrupt.
  • Since we always know the starting state of A, we only have to test the state of B to determine direction and track the current position.

The Arduino loop() function on line 40 does nothing more than print some diagnostic information about what we're reading from the encoder. To avoid chatter, the loop tracks current values against previous values so we don't print information we've already seen.

The interrupt handler on line 55 does all the heavy lifting:

  • When the encoder is moving in one direction, the pulse from channel A is leading the pulse from channel B. When the encoder is moving in the other direction, the pulses are reversed.
  • When the state of A and B are equal, B must be leading A, so the encoder is turning counter-clockwise. Otherwise, A is leading B, so the encoder is turning clockwise. Remember when we configured our interrupt to fire on rising? The state of channel A will always be high, so we only need to check the state of channel B to determine direction.
  • By comparing A to B instead of hard-coded constants, we can change the interrupt between rising and falling without breaking the interrupt handler.

The code on line 75 keeps the counter within the range 0 to 1249. This would allow us to compute angle or synchronize the position of the encoder to some other device.
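For example, converting the wrapped count to a shaft angle takes one line (a sketch based on the CPR constant defined above):

float angleDegrees = encoder0Position * (360.0 / CPR);   // 0.288 degrees per count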

The code on line 78 is an extra bit of diagnostic info we can use to track how many times our interrupt has fired.

Further Discussion

It’s much easier to understand how the interrupt handler works if you understand what’s happening when you turn the encoder shaft and reverse direction.

When you turn the encoder's shaft clockwise, A is leading B. This results in 4 distinct transitions that are repeated over and over as long as the shaft continues rotating in the same direction.

A      B
HIGH   LOW
HIGH   HIGH
LOW    HIGH
LOW    LOW

What’s important is this:

  • The inputs are latched, meaning that when we read B’s value from A’s interrupt handler the value we get is B’s state as it existed at the time the interrupt handler was fired. 
  • The handler is fired when A goes high.
  • When the shaft is turning clockwise, the handler is fired between the first two transitions (before B goes high), so we know the shaft is rotating clockwise when A is high and B is low.

If the shaft is turning clockwise and you stop turning, A remains high and B remains low.

If the shaft then starts turning counter-clockwise, B is leading A. This means that B has to go high before A’s interrupt fires again. Therefore, when both A and B are high, the shaft must be turning counter-clockwise.

Some makers may be inclined to use interrupts on both A and B. Unless you have an application where you absolutely must perform some action between A and B going high in both directions, the second interrupt is completely unnecessary. Interrupts are an expensive, limited resource, so it's wise to use them only when you need them.

References

http://playground.arduino.cc/Main/RotaryEncoders#Example1

Adding a COM port to a Windows Fusion VM on Mac OS X

This article applies to the following:

  • Mac OS X 10.8.2
  • VMWare Fusion 5.0.2
  • Windows XP SP3
  • RadioShack USB-to-Serial Adapter
  • MacBook Pro Retina

Install the Mac OS X drivers:

  • Download http://www.xbsd.nl/pub/osx-pl2303.kext.tgz and extract it to a temporary directory.
  • Open a terminal window and execute the following commands:

    cd /path/to/osx-pl2303.kext
    sudo cp -R osx-pl2303.kext /System/Library/Extensions/
    cd /System/Library/Extensions
    sudo chmod -R 755 osx-pl2303.kext
    sudo chown -R root:wheel osx-pl2303.kext
    sudo kextload ./osx-pl2303.kext
    sudo kextcache -system-caches

  • Launch System Preferences and verify the drivers loaded properly. Under Network, you should see a device labeled PL2303.
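
You can also confirm from the terminal; the exact bundle name may vary, so the pattern below is a best guess:

kextstat | grep -i pl2303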

Since Mac OS X and Windows can’t use the device at the same time, you’ll need to unload the kext. Open a terminal window and execute the following command:

sudo kextunload /System/Library/Extensions/osx-pl2303.kext

Note that you’ll need to repeat this command if you reboot your MacBook.

Install the Windows drivers:

  • Download the 260-0183 RadioShack driver from the RadioShack support site. Extract the files to a temporary directory.
  • Use the VMWare menu to connect the Radio Shack USB Device to the VM. Windows will launch the New Hardware applet. When prompted, instruct Windows to install the driver from the temporary directory containing the extracted driver files.
  • Launch Device Manager and verify the RadioShack device appears under Ports (COM & LPT).

Designing Scalable Software

This is the first article in a three-part series inspired by something I read in Scalability Rules: 50 Principles for Scaling Web Sites by Martin Abbott and Michael Fisher:

Design for 20x capacity. Implement for 3x capacity. Deploy for ~1.5x capacity.

Fortunately, designing for scalability doesn't necessarily require us to change the fundamentals of how we develop software. But it does mean we need to expand our thinking beyond the design characteristics we'd expect to see in the individual software components themselves: a team of the best developers can follow all of the best practices they've spent years mastering and still produce an application that doesn't scale. Even an experienced team that designs a flexible, robust architecture has no guarantees with respect to how the application will scale.

Everyone plays a role

The responsibility doesn’t lie solely within the engineering team. Whether you’re a developer, a product manager or a member of the executive team, we all would rather be part of a successful software project than an unsuccessful one. Being responsible for a project that attracted 1,000,000 users instead of 50,000 is a problem all startups dream of. It’s also a problem that quickly turns into a nightmare if each and every member of your organization isn’t committed to unlimited growth.

It is absolutely critical to have all of the stakeholders agree on the priority of scalability:

♦ An application that was not designed to scale will only scale by accident. This kind of fortunate accident is extremely rare.
♦ Scalability issues are never a priority until they become a customer issue in a production environment.
♦ Scalability issues almost always occur when the resources most qualified to deal with them have limited bandwidth.
♦ Solving a scalability issue beyond the design phase comes at a significantly higher cost, and usually at the expense of another high-priority project.


Use design patterns appropriately

Unlike technology, design patterns are not concepts that are constantly evolving. There is, and will always be, a limited number of logical ways in which components can interact with each other. The operative word here is logical. You must be able to manage the physical relationships and dependencies outside the logical boundaries of the major components you use to construct your application. This is the best means of ensuring that we do not inadvertently introduce a tight coupling where none should exist.

Apply best engineering practices rigorously and consistently

Not only would it be impractical to provide a comprehensive list, but not all best-practice idioms apply to every software development project. I've included the mantras I most often apply to my own designs, but it's up to you to determine the core set of design principles that best support your goals.

♦ Algorithms should be as complex as they need to be but as simple as they can be.
♦ A useful software component does only one thing and does it really well.
♦ Collaborating software components are loosely coupled.
♦ Third-party software components are used sparingly and appropriately. These components must be even more loosely coupled than the components we develop ourselves.
♦ A software component must expose a well-defined contract and well-documented unambiguous behavior.
♦ A software component increases its refactoring potential by keeping its algorithms private.


Apply the eight principles of Service Oriented Architecture

Service Oriented Architecture (SOA) addresses many of the fundamental principles that are necessary to developing a scalable application, but scalability itself is not SOA's primary focus. In fact, SOA is often achieved at the expense of performance and scalability and may introduce an even tighter coupling in the release process for dependent shared services. The key is to apply these principles intelligently, taking extra precautions to avoid unwarranted and unnecessary side effects that might impede scalability.

Quantify for maximum success and multiply your best guesses by a factor of x

One of the questions the developers I work with sometimes ask is: how much load do we expect? My answer is usually something along the lines of: show me your design and tell me why you think we need to worry about it. The fact is that it can be difficult to predict growth for a new innovative application. amazon.com's retail business is a perfect example.

I had the pleasure of listening to Jon Jenkins relay Amazon's early history at an AWS Summit a few months ago. Few people are aware of the fact that while Amazon was emerging as a successful online book store, they were running the site on three servers sitting in a closet. Shortly after what JJ referred to as a water event in 1998, they moved to a more reliable data center where they continued to grow both functionality and customers at a rapid pace.

Fast forward to 2005, when the Amazon engineers realized that their architecture would not support continued growth. If they neglected to address scalability, there would be no business. They spent the next six years migrating amazon.com to Amazon Web Services, turning off the last physical web server in November 2010.

Design for the cloud

Of all the propositions discussed here, cloud computing is probably the biggest enabling factor. The great thing about cloud computing is that it really doesn't ask us to discard any of the good design habits we've already acquired. It simply encourages us to expand the scope of how we apply those principles.

Let's take a simple example: I need to develop an application that will read data from a database, apply a series of analytic operations and write the results to a database where they will subsequently be consumed by a reporting application.
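
Sketched in code (the interfaces and names below are invented for illustration, not part of any real system), the example decomposes into three narrow contracts, and the physical details behind each one can then be scaled or relocated independently:

#include <string>
#include <vector>

typedef std::vector<std::string> Batch;

// Each stage hides its physical dependency (database, queue, cloud store)
// behind a narrow contract, so any stage can be replicated or moved
// without touching the others.
class Reader   { public: virtual ~Reader() {}   virtual Batch read() = 0; };
class Analyzer { public: virtual ~Analyzer() {} virtual Batch analyze(const Batch& in) = 0; };
class Writer   { public: virtual ~Writer() {}   virtual void write(const Batch& out) = 0; };

// The pipeline itself knows nothing about proximity or deployment;
// those decisions live entirely outside the logical boundaries.
void runPipeline(Reader& reader, Analyzer& analyzer, Writer& writer)
{
  writer.write(analyzer.analyze(reader.read()));
}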

Identify the technical requirements

Defining the requirements for the project extends well beyond the scope of your customer’s business needs. You also need to consider all of the factors that will impact your ability to deploy and support the application, as well as sustain growth without impacting performance.

♦ What are the limitations of the database I’m using? Are there built-in bottlenecks I need to worry about such as page or table level locks?
♦ How fast is the input data being generated and how quickly do I need to consume it?
♦ Is my database subject to high transaction volume?
♦ Are there other applications competing for significant I/O in the same database?
♦ Are there user-facing applications consuming the database in ways that may be impacted by a new data consumer?
♦ How much data am I going to read, process and write in one complete iteration? 10 MB? Or 10 GB?
♦ What factors determine the data volume and do I have control over any of those factors?
♦ What are the events that might impact transaction volume, I/O and data volume?
♦ Is it possible I'll encounter unpredictable spikes in I/O and transaction throughput?
♦ Are the analytic operations CPU intensive?
♦ Are the operations time intensive?
♦ Do the operations have external dependencies that introduce latencies I can’t control?
♦ Are there events that result in unpredictable spikes in the number of operations I need to perform?


Not only do I have to quantify the requirements for my new application, but I also need a complete understanding of the reporting application that's going to consume my data:

♦ What triggers the report generation? Are reports dynamically generated and displayed in a UI when a user clicks a button? Or are they generated in the background where they can be downloaded at some future point in time?
♦ How often are these triggers fired?
♦ Are there events that result in unpredictable spikes in the report frequency or size?
♦ How much data is consumed by the typical report?
♦ How much data is consumed by the worst-case report?


Identify proximity requirements

Once I have a complete understanding of how I might expect my application to be used, I can begin to make intelligent decisions about the proximity of my dependencies. If my new application expects to read a million rows, I probably want my application to live in close proximity to the database. If my reports are dynamically generated and presented to the user via a web interface, I can take advantage of the fact that users can visually consume a limited amount of data at one time — I can tolerate a considerable distance between the web application and reporting data without sacrificing the perception of good performance.

Mapping proximity requirements will generally be an iterative process during which natural system boundaries will start to emerge. These boundaries will highlight the areas where we can take advantage of distributed deployment opportunities as well as the areas where we need to minimize if not eliminate proximity requirements altogether.

Use the AKF Scale Cube to measure the scalability of your design

A well-designed application will be able to scale equally well in any of three dimensions:

[Image: AKF Scale Cube (see References)]

The X Axis represents our ability to scale horizontally, otherwise known as scaling out. Simply put, this is our ability to split our load across multiple instances. The ability to efficiently deploy an unlimited number of instances of my application means that I can always accommodate unexpected growth or load. In fact, the optimal solution would allow me to do any of the following with the least amount of time and effort:

♦ Deploy multiple instances to the production data center.
♦ Deploy multiple instances to different administrative domains, i.e., one or two permanent instances running in the production data center with additional temporary instances deployed to an EC2 server via Amazon Web Services.
♦ Dynamically acquire and deploy to AWS EC2 spot instances when the application detects unusual spikes in load and transaction volume.


The Y Axis represents the degree to which we've applied best practices and SOA. A system of smaller, more focused services almost always exhibits the characteristics usually associated with successful applications:

♦ Distributed functionality is usually well-encapsulated, easier to understand and easier to maintain.
♦ Services that collaborate via well-established contracts are generally more robust.
♦ Elastic architectures allow individual services to be improved independently.


The Z Axis indicates how well we will be able to support big data. Even the simplest application can be subjected to huge volumes of data generated by external events outside our control. The degree to which we can distribute data and transactions will determine whether our design imposes undesirable limits on growth.

♦ ♦ ♦

Next time, we'll take a deeper dive into how we apply these design principles to the implementation of scalable software.

About the author

Brenda Bell has been actively involved in software design and implementation for nearly thirty years. She is currently employed as a Software Architect at Awareness, Inc. and lives in Henniker, NH.

References:

SOA Principles
amazon.com’s Journey to the Cloud
AKF Partners: Splitting Applications or Services for Scale
Scalability Rules: 50 Principles for Scaling Web Sites

Twitter Tools Hack

[Edit: Now that I have all of my widgets configured and things structured the way I want them, I discovered one or two flaws in what I originally posted — like mangled tweets in RSS feeds. I’ve included the revised tweak below the original.]

When I was looking for a good WordPress theme, I stumbled across Wu Wei by Jeff Ngan. It’s a beautiful theme he uses for his Equivocality site. What I loved about the home page was that he has a nice mix of blog posts and tweets — all complete with little twitter icons.

That triggered a Google search for something that would give me more powerful, but easy to use, WordPress/Twitter integration. Twitter Tools is really packed with functionality, but there were just a few things that didn't quite work the way I'd like them to. I really wanted the tweets in my blog to look like tweets.

4 Ways To Improve Technical Content On The Web

No matter how many books I read, the Internet is still my primary source of technical information.
Time is my most precious resource.
Wasted time is something I can never recover.


It's frustrating to Google for technical content and spend time reading pages of information that seems pertinent, only to discover that it's completely irrelevant because it doesn't apply to my environment or is out of date. What's more annoying is that this scenario seems to be the rule rather than the exception.

Rules 1 and 2 describe what you can do to help your readers assess whether your post is relevant to what they’re looking for. Rules 3 and 4 discuss tactics that will simply make your contributions more valuable.

Rule #1: Date stamp your posts

Change is the one thing you can count on… and in the world of software engineering, you can expect those changes to happen pretty quickly. Information that may have been highly relevant a year ago could be completely irrelevant today. Your reader has no means of assessing the currency of your content unless they know when it was written.

Rule #2: Provide a complete context

Many of the modern technologies we use to develop applications change significantly from version to version. If you're contributing knowledge that applies to a specific version, such as Visual Studio 2010 SP1, that information should be prominently displayed near the top of the page. Include as much context as possible. If you're describing a scenario that exists when running Visual Studio on Windows XP, you should say so. If you don't know whether the information applies to other OS or application versions, you should say that as well.

Rule #3: Get a peer review

If you’re an inexperienced writer or writing in a language that’s not native to you, ask someone to review what you’ve written for accuracy and readability. I once worked with a very smart programmer who had a terrible time with negation terms in the English language. As a result, we spent countless hours misunderstanding each other because what he said was the opposite of what he meant to say. Your readers won’t necessarily have the opportunity to ask questions about what you meant. If your writing is grammatically correct and easy to understand, they won’t have to.

Rule #4: Avoid using obscure links

Many web sites have been known to restructure their pages, leaving a lot of dead links and bookmarks in their wake. If you use links like MSCDEX May Not Detect Disk Change and the referenced page disappears, your reader still has a title they can Google for to find the new location. If you use links like click here, the only remaining context is the URL itself which is usually not enough to track down the broken reference.