
Saturday, February 21, 2009

Weight Loss And The iPhone

I had a few extra pounds after the holidays. OK, a few more than a few. But not as many as some previous times. I'll leave my efforts to get off the diet roller coaster for another post (current thinking), but suffice it to say that I have once again managed to shed some poundage.

In late December I realized that I was going to have to buy a new wardrobe if I didn't do something. Sure, I used to have clothes one size up (and two sizes up and three sizes up...), but I got optimistic and threw them out after the last time I lost weight.

So my plan was to do the diet journal thing again. Set a goal, calculate a calorie budget, write everything down in a Moleskine Cahier and generally keep myself on track.

Then it occurred to me that I have a shiny new iPhone. Like a Moleskine Cahier and Space Pen, it can always be with me. Unlike a Moleskine, it has the potential to do portion size calculations and easy numerical tracking. Plus, since I am a gadget freak, using the iPhone adds considerably to my motivation.

So I downloaded most of the apps in the App Store that I thought might do what I want. What I want is:
  • Ability to track calories by meal
  • A database of foods to replace my trusty Calorie King book
  • Ability to track weight over time
  • Ability to add my own foods to the database
  • Ability to track just calories (so I don't always HAVE to add something to the database)
I eventually settled on iShape. It does all of these things except that the database isn't complete by any means and doesn't replace Calorie King. It has some other nice features too. You set a goal weight and activity level and it calculates a target calorie limit for you and estimates when you will meet your goal. You can customize many elements (like set your own daily calorie target) and you can add in the effects of exercise and track your waist measurement and BMI. Plus tons of features that I never used (track water, fat, protein, exercise, etc).

I also added WeightBot for some additional weight tracking because I really like the user interface. And the sounds. Slick. But it lacks the one graph view I really want (start to finish), so it ended up not being quite what I wanted. iShape does it all.

Anyway the iPhone is a pretty good system for the diet journal approach. I always had it with me. I always had the ability to google for things that weren't in the database. I could snap a photo of a meal if I didn't feel like estimating it right that moment. It is slower than a notebook to add new foods, but much faster to deal with your favorites. The calorie tracking graph in iShape kept me honest and allowed me to try to make up for some bad days with some good days. The weight tracking (in both apps) kept me motivated as I saw the changes.

If you are a nerd and you have an iPhone and you are overweight (are those three things completely redundant?), give it a try. I guarantee calorie tracking works if you commit to it.

I hit my goal this morning. I have officially been under the "overweight" line for a few days, but it feels really nice to actually hit the target.

I hit my goal just in time to absorb the extra calories from my fresh homemade doughnut breakfast this morning. Did I mention that I got a deep fryer? All things in moderation.

David


(for anyone reading this who has never met me, those numbers are correct - I'm 6' 7")

Saturday, January 24, 2009

Programmer's View of Self Checkout

Jeff Atwood has a blog post up comparing the open source software model with self-service checkout lines at the supermarket.

But as a developer, that is not what I think about when I use the self-service lines (which I almost always do).

I think about how farking awful the software is.

I shop at one of several local Super Stop and Shop stores that have mobile scanners. You scan your Stop and Shop card at the entrance on a rack of mobile scanners. This presumably identifies you as a trusted (or at least registered) customer. A mobile bar code scanner lights up, you pull it off the rack, and away you go to do your shopping.

This is great (even with the software complaints) because I can bag my groceries into sturdy reusable bags as I go and push my whole cart through at checkout. In effect this lets me parallelize bagging/checkout with my shopping, which saves me huge chunks of time. When you are usually shopping with a tired 3-year-old, you seriously want to spend the minimum time possible in the store. This process is made even more attractive by the fact that they have let go almost all of their dedicated baggers, so normal checkout is now considerably slower than it used to be.

Every once in a while you get "audited" by a clerk who enters a special code and scans several items to make sure that you didn't slip anything into your bags. Even this doesn't take too long if the clerk wasn't halfway across the store when they got the page to audit you.

I have three major problems with the software:
  1. The scales are slow
  2. The checkout is slow
  3. There is a huge disparity between stores
First the scales. The way the store deals with the problem of having to weigh produce is that you weigh it at a special scale, then print a bar code, attach it to your bag of broccoli, scan the bar code, then drop it in your bag. Elegant and simple.

OK, but how do you look up "broccoli"? Well, there is a search screen and you can start typing "b-r-o" and as you type a set of icons will appear that match your input. This is a nice UI design. But you have to wait 4-5 seconds between each letter typed. It does not buffer your typing so it never catches up, and there is no feedback that your typing is pointless.

Think about this problem space for a moment. There are at most a few hundred items in the produce section. Even on most embedded systems you should be able to fit the whole searchable database in RAM. Even if you store it in a horribly inefficient way. How on earth can this be so slow? I am reasonably confident that I could write a vastly superior search implementation on my 2002-era Java-enabled phone. My blackberry and iPhone could both do this without breaking a sweat.
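
Just to put a stake in the ground, here is a minimal sketch of the kind of lookup this is - a plain linear scan over an in-memory list (the produce names are made up). Even this naive version would respond between keystrokes with no perceptible lag:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ProduceSearch {
    private final List<String> items;

    public ProduceSearch(List<String> produceNames) {
        items = new ArrayList<String>(produceNames);
    }

    // Naive linear scan. At a few hundred items this takes microseconds,
    // so the screen could refresh on every keystroke.
    public List<String> matching(String prefix) {
        String p = prefix.toLowerCase();
        List<String> hits = new ArrayList<String>();
        for (String item : items) {
            if (item.toLowerCase().startsWith(p)) {
                hits.add(item);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        ProduceSearch search = new ProduceSearch(Arrays.asList(
                "bananas", "broccoli", "broccoli rabe", "brussels sprouts"));
        System.out.println(search.matching("bro")); // [broccoli, broccoli rabe]
    }
}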

Fortunately there is a shortcut. If you know the PLU code you can enter that. I occasionally skip purchases if I can't easily find the PLU code, though. It's just not worth the pain.

OK, on to the checkout. When you are done shopping you go to one of several special lines and scan a special barcode that signals that you are done shopping. This triggers the unit to start downloading data (or notifies the central system if the downloading happens as you shop - whatever). You place the scanner in a holder by the checkout stand and scan your Stop and Shop card at the register.

The register then starts ringing up your order. One. Item. At. A. Time. At about a second or a second and a half per item. What? They have all this data for what you purchased at their fingertips and it has to go this slow? Think about this problem space for a moment. Is this a different problem from recalculating a spreadsheet? No, it isn't. How would you feel if your spreadsheet took 1-1.5 seconds per line to recalculate? You would throw your computer out the window - that's how you would feel.

OK, now my last problem - disparity between stores. A little over a year ago I went to a different local Stop and Shop and used the same system. It isn't as close to me, but it is near a favorite liquor store and a Starbucks so I find myself in the neighborhood from time to time needing to do some shopping.

Well, imagine my surprise and delight when I got to the register and my entire shopping cart rang up instantly. Not. One. Item. At. A. Time. Sweet! They fixed the bug.

It has been over a year and my store still hasn't updated their system. What? Who does Stop and Shop hire to do IT project management? What are they thinking? If you have a known bug that is likely to drive people nuts and you have fixed it, for goodness sake put it out there for your users. You look like a total idiot if you don't.

David

Thursday, January 1, 2009

My New iPhone

Everything interesting about the iPhone has probably already been written, but I got a new iPhone for Christmas and I am going to make a few comments anyway.

I am totally hooked on my iPhone and I wish I had gotten one sooner. I didn't because I couldn't convince myself that I wanted an iPod on my phone or a phone on my iPod.

I had it all wrong. The iPhone is the best mobile computing platform ever and it just happens to have a phone and an iPod. They are almost incidental.

I owned my iPhone for two days before I even plugged in the earphones. It is a great iPod, but doesn't hold a candle to my 120 GB classic for capacity and variety. I don't really like the idea of sucking my phone's battery with music, either.

But I am doing most of my personal email, most of my RSS reading, and most of my web searching/surfing and social networking on the iPhone.

The keyboard took a little getting used to, but I type almost as fast on the iPhone as I do on my blackberry now. I typed this entry with it. (iBlogger)

It is also a great gaming platform. Fieldrunners isn't just a "great game for the iPhone." It is a great game. Period.

It isn't a replacement for a sketchbook, but one can do simple doodles and contour drawings on it. (No. 2)

It is a brilliant calculator. Sci-15c and i41CX+ are both excellent apps. The fact that I feel the need for both probably says a lot about me - but I'd rather not examine that too closely. :-)

I have been spoiled for a while by having a blackberry with GPS and Google Maps and I am no longer willing to live without that functionality. The iPhone is better.

It is a great Twitter client - I am using Tweetie for almost everything Twitter.

It is a brilliant wifi locator (WiFinder).

Plus there are lots of totally new things you can do. Mobile Pandora is cool. As are SnapTell and midomi.

And in a pinch it is even a kitchen timer, diet aid, eBook reader, white noise generator, binaural beat machine, clock, flashlight, level, or a ruler.

I use my computers less now.

David

Thursday, September 11, 2008

Developer Best Practices

I have been thinking about best practices at work and I am going to throw some ideas out here to help me think them through and to get feedback if any of you have opinions. This isn't complete or well organized - I am just trying to get the juices flowing.

  • Do not rely on this or any other document to provide a complete list of best practices. There are too many and some seem too obvious to talk about. You are a professional software developer - learn the craft. Don't stop learning. Aim for 10X.
  • There is no absolute "best" way to do anything. Think about what you are trying to accomplish. Be pragmatic.
  • Design applications in layers. Create distinct modules or blocks of function. Design the module interfaces first. Design by contract.
  • Write unit tests.
  • At a minimum write unit tests of the module interfaces.
  • Ideally write the tests before the code.
  • If you are about to debug something, stop and write a unit test for it first.
  • Don't create mindless unit tests. The auto-generated ones your IDE makes for you are only stubs - don't rely on them for coverage.
  • Inject your dependencies as a matter of course. This will greatly simplify unit testing. You don't actually need a framework for this - understand the underlying principle instead (there is a small sketch after this list).
  • Keep stuff in source control (SVN, CVS).
  • Do frequent check-ins.
  • Comment your commits.
  • Don't rely on your IDE to do builds. Have a build script instead.
  • Have an independent environment where the build script can be run that is not a developer's personal space. Ideally do continuous integration here, but at a minimum regularly check that this build environment still works. Use this environment to build releases.
  • Settle on a layout style for the team. It doesn't matter which style, it matters more that everyone is consistent. Or just agree to do it differently and people can reformat their code using their IDEs. But make sure everyone is on the same page.
  • Optimize code only when you have demonstrated that it performs poorly. Make a regular practice of benchmarking modules. Run the benchmarks with your unit tests so you know when something gets really out of whack.
  • Employ regular peer review of some kind. Internal code reviews. Pair programming. Whatever works for your team. This is how bad code gets found and how developers learn to get better.
  • Also have occasional external code reviews that involve more than just the immediate team.
  • Encourage zero code ownership. Everyone owns all of the code. Anyone can change/fix anything. Any bug is everyone's responsibility.
  • Use tools regularly to check for common bugs/problems. If I review your Java code I am going to run findbugs on it - so you should too.
  • Minimize local configuration information in multi-tier applications. An ideal java midtier should only need to know one thing - a JNDI datasource from which all other configuration information can be derived. An ideal thick client should get global configuration from the server.
  • Always close resources. Take full advantage of try/finally blocks (see the second sketch after this list).
  • Don't reinvent the wheel. Use the core libraries of your platform. If you are writing in Java and you think you need to write a sort routine, or a string to date routine, or an array copy routine: STOP. Find good adjunct libraries and use them.
  • Have a well thought out strategy for handling exceptions and logging. All errors should be logged, preferably once. The log entry should give enough information to lead back to the root cause. It should be time stamped. It should contain a stack trace if applicable. It should have a context appropriate message that can be understood by someone not intimately familiar with the code.
  • Do your release cycles faster. I don't care how fast you are doing them now, go faster.
  • It is OK to make pragmatic short term compromises in your code, but make sure you go back and fix them very soon. Do not fall into the boiled frog trap.
  • Eat your own dog food. If you design a service interface and never actually use it, chances are it will actually be unusable in some way. The same for a user interface.
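
To make the dependency injection point concrete, here is a minimal sketch - no framework, just a constructor parameter (the names are invented):

import java.util.Arrays;
import java.util.List;

interface PriceService {
    double priceFor(String sku);
}

public class InvoiceCalculator {
    private final PriceService prices;

    // The dependency comes in through the constructor, so a unit test
    // can hand in a stub instead of wiring up the real service.
    public InvoiceCalculator(PriceService prices) {
        this.prices = prices;
    }

    public double total(List<String> skus) {
        double sum = 0;
        for (String sku : skus) {
            sum += prices.priceFor(sku);
        }
        return sum;
    }

    public static void main(String[] args) {
        // In a test, the "service" is just a stub - no container required.
        InvoiceCalculator calc = new InvoiceCalculator(new PriceService() {
            public double priceFor(String sku) { return 1.00; }
        });
        System.out.println(calc.total(Arrays.asList("a", "b"))); // 2.0
    }
}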
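
And the shape I have in mind for resource handling (this is the pre-Java 7 idiom; today try-with-resources does the same thing with less ceremony):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ResourceExample {
    public static int firstByte(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        try {
            return in.read();
        } finally {
            in.close(); // runs whether read() returns normally or throws
        }
    }
}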

Recommended reading for best practices:
Any thoughts out there?

David

Tuesday, August 19, 2008

On JavaFX

JavaFX has the potential to be a really cool technology, but it has a little ways to go yet.

At work we are trying to figure out what we want to do in the Rich Internet Application space and I was asked to review JavaFX to ensure that it could perform one really critical niche function - that it can be used with existing libraries to display chemical structures. The simple answer is that it can. I had no trouble at all writing very thin plain Java wrapper classes around ChimePro, Marvin Beans, and JMol so that I could use them in a JavaFX script.

The more complicated answer, though, is that there are still lots of gotchas.
  • The API has been in a state of flux, so code examples that you can google up tend to be unusable (ignore anything dated 2007 or that has an import for javafx.ui.*).
  • The API is still fluxing, so you can expect lots of changes soon. One critical bit that relates to my Swing component test is that the base display (container) classes are changing. To get the cool "drag from the browser" behavior you need to start with the Application class (or maybe Applet) and add Nodes to the Scene, but to use Swing components you have to use the older containers (that work fine for a JNLP application). And you can't easily subclass Node to fix this yourself because it is an abstract class that has an abstract method that returns an object in the com.sun.* hierarchy - something you probably shouldn't mess with.
  • There isn't a lot of good information out there. There is one book. It is a decent book, but it tried to hit a moving target so it isn't perfect. Combine the book with the author's blog and you have one good source of information. The API documentation and related Sun documents are the other good source of information, but they are still far less complete than, say, the Java documentation.
  • Netbeans integration is not complete. It is really well done, it just isn't quite finished. The visual constructor/previewer is brilliant, but there isn't support yet for fixing problems (e.g. fixing imports) - you just get a red (!) and you have to figure out why (and the compiler messages aren't really all that informative ("I got confused by...")).
The language itself, though, is kind of neat once you get past the declarative style. It makes it easier than plain Java2D to create spiffy new graphical content. But you can't do anything in JavaFX that you can't do in Java2D if you know what you are about.

One of the central powers of JavaFX is that it can use any Java library, but that might also be its downfall, especially if it gets widely adopted by mediocre Java programmers. One of the reasons that Applets have gone the way of the Dodo bird is that people had a tendency to bloat the jar downloads to the point that the Applets were slow and finicky and painful to use. The same is certainly possible with JavaFX. Don't do this. Keep the presentation layer as thin and clean as you possibly can.

The allure of a declarative scripting language for rich application development that harnesses all of the power of the Java platform is undeniable and I will continue to play with the technology as it matures. But it isn't quite ready for prime time yet.

David

correction: Where I said Scene I should have said Stage. My memory tricked me. I am referring to javafx.application.Stage.



In answer to a question in the comments, here is an example of how to turn JMol into a JComponent. Note that although you don't see long lines in the blog, when you copy/paste the text you seem to get it all. I'm not sure why and I am open to suggestions for better ways to include code.



package fxstructurerendertest;

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import org.jmol.api.*;
import org.jmol.adapter.smarter.*;

/**
 * A Swing component that hosts a JMol viewer, with a right-click popup
 * menu for toggling common display options.
 */
public class JMolDisplay extends JComponent implements ActionListener {

    private static final long serialVersionUID = -5404266974773735194L;
    private JmolViewer viewer;
    private JmolAdapter modelAdapter;
    private JPopupMenu popup = new JPopupMenu();
    private JCheckBoxMenuItem dotsItem = new JCheckBoxMenuItem("Dots"); // dots on/dots off
    private JCheckBoxMenuItem spaceItem = new JCheckBoxMenuItem("Spacefill"); // spacefill on/spacefill off
    private JCheckBoxMenuItem ribbonItem = new JCheckBoxMenuItem("Ribbon"); // ribbon on/ribbon off
    private JCheckBoxMenuItem colorAminoItem = new JCheckBoxMenuItem("Color Amino"); // color amino/color NONE
    private JCheckBoxMenuItem colorChainItem = new JCheckBoxMenuItem("Color Chain"); // color chain/color NONE
    private JCheckBoxMenuItem bondItem = new JCheckBoxMenuItem("Bonds"); // wireframe 25/wireframe off - SELECTED
    private JCheckBoxMenuItem atomItem = new JCheckBoxMenuItem("Atoms"); // cpk 25/cpk 0 - SELECTED
    private JMenuItem resetItem = new JMenuItem("Reset"); // reset

    public JMolDisplay() {
        modelAdapter = new SmarterJmolAdapter(null);
        this.setPreferredSize(new Dimension(100, 100));
        viewer = JmolViewer.allocateViewer(this, modelAdapter);
        viewer.setJmolDefaults();
        viewer.setColorBackground("BLACK");
        this.addMouseListener(new PopupListener());
        bondItem.setSelected(true);
        popup.add(dotsItem);
        popup.add(spaceItem);
        popup.add(ribbonItem);
        popup.add(colorAminoItem);
        popup.add(colorChainItem);
        popup.add(bondItem);
        popup.add(atomItem);
        popup.add(resetItem);
        dotsItem.addActionListener(this);
        spaceItem.addActionListener(this);
        ribbonItem.addActionListener(this);
        colorAminoItem.addActionListener(this);
        colorChainItem.addActionListener(this);
        bondItem.addActionListener(this);
        atomItem.addActionListener(this);
        resetItem.addActionListener(this);
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        Object c = e.getSource();
        if (c == dotsItem) dots();
        else if (c == spaceItem) spacefill();
        else if (c == ribbonItem) ribbon();
        else if (c == colorAminoItem) amino();
        else if (c == colorChainItem) chain();
        else if (c == bondItem) bond();
        else if (c == atomItem) atoms();
        else if (c == resetItem) {
            // alt-click on Reset opens an ad hoc script console
            if ((e.getModifiers() & ActionEvent.ALT_MASK) != 0) {
                ScriptDialog d = new ScriptDialog();
                d.pack();
                d.setVisible(true);
            } else reset();
        }
    }

    private void reset() {
        runScript("reset");
    }

    private void atoms() {
        //log.debug("cpk toggle");
        if (atomItem.isSelected()) runScript("select all;cpk 100");
        else runScript("select all;cpk 0");
    }

    private void bond() {
        //log.debug("bond toggle");
        if (bondItem.isSelected()) runScript("select all;wireframe 25");
        else runScript("select all;wireframe off");
    }

    private void chain() {
        //log.debug("chain color toggle");
        if (colorChainItem.isSelected()) runScript("select all;color chain");
        else runScript("select all;color NONE");
    }

    private void amino() {
        //log.debug("amino color toggle");
        if (colorAminoItem.isSelected()) runScript("select all;color amino");
        else runScript("select all;color NONE");
    }

    private void ribbon() {
        //log.debug("ribbon toggle");
        if (ribbonItem.isSelected()) runScript("select all;cartoon on\ncolor cartoons structure");
        else runScript("select all;cartoon off");
    }

    private void spacefill() {
        //log.debug("spacefill toggle");
        if (spaceItem.isSelected()) runScript("select all;spacefill on");
        else runScript("select all;spacefill off");
    }

    private void dots() {
        //log.debug("dots toggle");
        if (dotsItem.isSelected()) runScript("select all;dots on");
        else runScript("select all;dots off");
    }

    public JmolViewer getViewer() {
        return viewer;
    }

    class PopupListener extends MouseAdapter {
        @Override
        public void mousePressed(MouseEvent e) {
            showPopupIfNeeded(e);
        }

        @Override
        public void mouseReleased(MouseEvent e) {
            showPopupIfNeeded(e);
        }

        private void showPopupIfNeeded(MouseEvent e) {
            if (e.isPopupTrigger()) {
                popup.show(e.getComponent(), e.getX(), e.getY());
            }
        }
    }

    public void runScript(String script) {
        viewer.evalString(script);
    }

    class ScriptDialog extends JDialog implements ActionListener {
        JTextArea scriptArea;

        ScriptDialog() {
            scriptArea = new JTextArea();
            scriptArea.setPreferredSize(new Dimension(200, 200));
            this.setDefaultCloseOperation(JDialog.DISPOSE_ON_CLOSE);
            this.getContentPane().setLayout(new BorderLayout());
            this.getContentPane().add(scriptArea, BorderLayout.CENTER);
            JButton runButton = new JButton("Run");
            this.getContentPane().add(runButton, BorderLayout.SOUTH);
            runButton.addActionListener(this);
        }

        @Override
        public void actionPerformed(ActionEvent e) {
            runScript(scriptArea.getText());
        }
    }

    /**
     * Loads a structure into the viewer.
     *
     * @param structure the raw structure data (e.g. a MOL or PDB file as a string)
     */
    public void setStructure(String structure) {
        viewer.openStringInline(structure);
    }

    /**
     * Painting is delegated entirely to the JMol viewer.
     */
    @Override
    public void paint(Graphics g) {
        Rectangle rectClip = new Rectangle();
        g.getClipBounds(rectClip);
        viewer.renderScreenImage(g, this.getSize(), rectClip);
    }
}


Sunday, August 3, 2008

Yes, I Think Your Project Sucks

I am a software/systems architect in a very large corporation for which software and IT are not core business. In this role and previous roles I have frequently been called upon to participate in project code reviews.

When I perform these reviews I usually generate pages of notes and constructive criticisms that I share with the project team. I don't pull punches in these reviews. If I spot some code that is smelly I will highlight it and explain why it is wrong. I will call out shoddy code organization and poorly thought out build scripts and stupid package naming and bad error handling and crappy unit tests and useless comments and other details ad nauseum.

But all of these details are the trees. What makes me like or dislike a project is the overview of the forest. To be sure, one element is the sheer number of details that I have to call out. An overgrown forest stands no chance. Here are some other higher level things that I look for:
  • Evidence of a professional developer attitude
  • Evidence that the end goals are understood
  • Evidence of higher order design/architecture

Evidence of a professional developer attitude

Show me that you have a process. That you care about the quality of the project and the code. That you care what other developers think. That you are trying to set a good example. Some indicators:
  • Clean, well organized repository
  • Repository is up to date and has regular check-ins (with comments!)
  • Clear build scripts that serve as documentation for the build process and dependencies
  • Unit tests and TDD thinking (even if you don't do TDD)
  • Continuous build (or at least nightly)
  • A single central build environment (NOT your IDE, doofus!)

Evidence that the end goals are understood


Show me that you understand the user. That you aren't going to show the user error messages like, "Database Error: Code 5387". That the user interface doesn't freeze or time out. That you have processes in place so that you can quickly debug problems in production. If this is an API, show me that it is clear and understandable and that the contract is utterly explicit. Some indicators:
  • A system for handling errors/exceptions that is logical, that logs details, that presents the user with a message that is clear and simple.
  • Asynchronous UI processes
  • Sensible timeout logic where appropriate
  • Integration/unit tests that monitor task time and alert when changes to the code may have slowed something down
  • For APIs do the error messages contain the extra details that a developer would need?
  • Do you eat your own dog food?
  • Are you logging in such a way and at an appropriate level of verbosity such that you can troubleshoot unexpected problems?
  • Documentation for any APIs. Yes, documentation. If you can't clearly explain in a document how to use your API, then what good are you (or it)?
Evidence of higher order design/architecture

Show me that you have thought about reusability. That you understand the value of layered abstractions. That you understand what an object is and what a process is. Every time you look for a library to do X for you and don't find it, do you then write a decent library to do X? Or do you just hack some code to do X into the middle of some poor unsuspecting class? Some indicators:
  • If you are writing a service that does Y, do you have a well orchestrated class/library that does Y, upon which you have layered a service? (as opposed to building the functionality directly into the service classes/methods)
  • Are complex tasks broken down into small logical slices? (as opposed to long confusing methods)
  • Are things that could/should be reusable actually reusable? (as opposed to having unnecessary interdependencies with the rest of the project)
  • Are your objects named like (and function like) nouns?
  • Are your method and object names indicative of actual behavior?
  • Is the project an assembly of small pieces that fit neatly into a bigger picture? (as opposed to a monolithic beast)
It isn't shocking to me that I have never seen a project that scores perfect on all of these indicators. Perfection should be rare. But it is shocking to me how many projects I have seen that score zero.

David

Saturday, August 2, 2008

Back Up Your Online Content!!

It is kind of ironic that I am writing this post now as I have just been doing research for myself on how best to do the opposite (put my back-ups online on e.g. S3).

It recently came to my attention (via a poker blogger who was irate that Tao of Poker got locked) that some blogs on Blogger got improperly locked by a spam detection bot. I did a little investigation to see how big the problem was and it is big. Really big.

Google has admitted that this is a mistake here and here. Fair enough. Mistakes happen. But what if it hadn't been a mistake? What if it happened and you were on a three week vacation and were offline? What if you lost all of your writing?

This got me thinking about how much I have been trusting online companies with my content. Probably too much given that they have absolutely no obligation to me at all. The only thing keeping me "safe" is the character of the folks actually working at Google et al. and the corporate fear of bad publicity. If my gmail account contents were lost I would be devastated. Possibly crippled.

So I did some long overdue back-ups. And you should too.

The best way to back up gmail is to turn on POP3 support with the "Enable POP for all mail" flag set. Then fire up a POP client and sync. Apparently it takes multiple syncs because Google throttles the number of messages you can get in one go, so make sure that you have it all.
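
If you would rather script the download than babysit a desktop client, the general idea in JavaMail looks something like this. This is a sketch, not a hardened tool - the address and password are placeholders, and because of the throttling you would rerun it until the batches stop coming:

import java.io.FileOutputStream;
import java.util.Properties;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;

public class GmailPopBackup {
    public static void main(String[] args) throws Exception {
        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("pop3s"); // POP over SSL
        store.connect("pop.gmail.com", "you@gmail.com", "your-password");
        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);
        Message[] messages = inbox.getMessages(); // one throttled batch
        for (int i = 0; i < messages.length; i++) {
            FileOutputStream out = new FileOutputStream("msg-" + i + ".eml");
            try {
                messages[i].writeTo(out); // raw RFC 822 message text
            } finally {
                out.close();
            }
        }
        inbox.close(false);
        store.close();
    }
}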

The best way to back up your Blogger content is to formulate a link like this:
  • http://twentyonetimestwo.blogspot.com/search?max-results=10000
Where you replace "twentyonetimestwo" with your blog name and "10000" with a number that is more than the total number of posts you have written (if necessary). Then do a "Save Page As" from Firefox. If you choose "web page, complete" as the type you will get your images too.

Do it now.

David

Wednesday, July 30, 2008

SOAP and REST and Tools and Consequences

Jeff makes some interesting points about SOAP and REST. SOAP is probably dead. R.I.P. But I want to take the conversation a step further.

He points out that vendors haven't really jumped on the REST bandwagon and I agree - they haven't. But I think that this is a Very Good Thing (TM).

First of all REST is simple enough that (especially on the client side) you don't really need tools. This is where a lot of the utility comes from.

Secondly, and more importantly, REST hasn't beaten SOAP because the protocol is massively superior. REST has beaten SOAP because of SOAP's complexities and incompatibilities. But it isn't the protocol that is complex and incompatible, it is the tools!

If you want to build a successful service protocol I think you have two choices:
  1. Build something simple enough that normal inconsistencies can easily be dealt with on a project by project basis. (e.g. REST or basic HTML request/response)
  2. Build something that is absolutely specced out to the nth level of detail so that there is absolutely no ambiguity and produce several reference implementations (e.g. IDL/IIOP)
Anything in between these two extremes is doomed. I claim that tools (especially mediocre tools) could, if widely adopted, push REST out of category 1 and into the middle territory. Either that or the tools would die. Either way I don't see the long term value and I hope that the vendors stay away for a while longer.

David

Tuesday, July 29, 2008

You Make PB&J Wrong

How can that be, you ask? All you have to do is take two slices of bread, spread some peanut butter on one slice, some jam on the other and slap them together, right?

That might be acceptable if you consume it immediately, but otherwise, no, you aren't doing it right. If you pack your lunch, you will very much regret your casual attitude towards PB&J.

I won't even go into the advanced math required to get the amount of filling right. Suffice it to say that you want as much filling as possible without any squirting out when you bite.

What I really want to talk about is the long term effects of jam on bread. It isn't pretty, people.

jam + bread + time = goo.

Over the centuries there have been countless attempts to solve this problem, but most people don't even bother. This is wrong. But then most of the well known solutions aren't ideal. My grandmother used to spread room temperature butter on the jam side. This nicely solves the jam and bread problem, but unfortunately the result is no longer actually a PB&J sandwich, it is a PBB&J sandwich.

But there is a (nearly*) perfect solution. Spread a very very thin layer of creamy peanut butter on the jam side slice of bread before applying the jam. Really, this works great.

Why do I care how you make PB&J sandwiches? I don't. Not one bit. I don't even care if you agree with my solution. I am using PB&J as a metaphor.

But I'm not going to tell you what the metaphor is. That is an exercise for the reader.

I will, however, make the claim that if you read all of that and actually thought about whether that was a good solution to the problem, then you just might have the capacity to be a good software designer.

David

* - The solution is only nearly perfect because it slightly changes the slipperiness of the bread and the advanced filling calculations have to be redone.

Friday, July 25, 2008

Klatte's Law

I was just listening to the 6/30 episode of TWiT (yes, I am way behind in my podcasts - blame my audible.com addiction).

This is the episode where they talk about Bill Gates' retirement and the contributions he made to the world and the industry. Their conclusion was that his biggest contribution was a strategy based on an understanding of Moore's Law. The thesis is that he understood that if you release something as soon as it will run (even if it runs badly), Moore's Law will bail you out shortly and that is more efficient than trying to make something that runs tight in the first place. I agree that this strategy was a huge part of the success of Microsoft and it was clearly both the correct strategy in hindsight and a foreseeably correct strategy given Moore's Law.

But I think that strategy is failing now and I think that this is a huge part of the reason why Vista is tanking. It doesn't have much to do with why Vista is not of interest to me (that is more around DRM and a closed mentality), but I think it is a big part of the greater market failure.

Why is the strategy failing? I have to introduce a concept for a minute here and for the purposes of this article, let's call it "Klatte's Law". Klatte's Law states that at any moment in time, a user's computational needs can be represented as a gently sloping linear increase. This will stagger up as the user discovers whole new categories of computational need (i.e. if someone who just does word processing starts doing video editing, there will be an enormous jump, but both before and after the jump it is a gentle slope).

Klatte's Law applies to mass market computing, but not to specialized niches like high performance computing.

If you were to plot a zoomed-in view of Moore's Law against Klatte's Law, it might look something like this:

[figure: Moore's Law (an exponential curve) and Klatte's Law (a gently sloping line with occasional jumps) plotted against time, crossing at a single intersection]

The strategy of ignoring tight code in favor of depending upon Moore's Law works as long as you are to the left of the intersection of Moore's Law and Klatte's Law, but it starts to break down as you approach the intersection and fails completely to the right of the intersection.
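
In symbols, if you like (the constants are invented, only the shapes matter): Moore's Law gives capability C(t) = C0 · 2^(t/T) with T somewhere around two years, while Klatte's Law gives need N(t) = N0 + k·t. The bloat-and-wait strategy is safe while C(t) < N(t), shaky near C(t) = N(t), and doomed once C(t) > N(t).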

Fundamentally, I think that the market as a whole is currently to the right of this intersection. This could change at some point if a killer app comes out that requires tons of computational power (but this killer app will not be an operating system). You can depend on Moore's Law to bail you out if you think you have that killer app, or if you are so far to the left of the intersection that the user already wants to upgrade their system.

But if you do not have a killer app and if the user is satisfied with their current system, you are toast.

"What do you mean I need a quad core machine with twice as much ram to run Vista well? My core2 duo machine does everything I need just fine!"

Microsoft understands Moore's Law, but they don't understand Klatte's Law at all.

David

Saturday, June 7, 2008

Hacking GPS and Google Maps

OK, maybe not quite hacking. Still, this is a compendium of things that I have learned about my Garmin Nuvi 770, GPSes in general, and tools that work with them, especially Google Maps. Since half the people I know seem to have a new Nuvi, maybe someone will find some of this useful.

Extracting current data
The current favorites, routes, and so on are always available for export from a file called current.gpx. You could use this data to transfer favorites between units, or to convert favorites to custom POI, or to show your routes in Google Earth, or whatever. A very handy bit of data to be able to get to. As far as I can determine, this file is created exclusively for the purpose of having a generic way to export data so don't edit it. It is my guess that the generation of this file is what is happening when the progress bar is visible after plugging the Nuvi into a computer.

Data Transfer
Almost all of the data that is in the unit can be transferred to an SD card if you want. Including the built-in data. I've been told you can even transfer the maps if you rename them, but I haven't had the courage to try that. Moving data works great with added content, including MadMaps. And of course you can always just put it on the SD card in the first place for things like MP3 and Audible. This is useful because the Nuvi 770 doesn't come with a huge excess of memory.

You can also delete unused language files and other crap from the unit entirely.

Back everything up, of course, or you are an idiot.

Adding Routes
You can add gpx files to the gpx directory, then when you turn on the unit tell it to import the file. These will go into your routes.
Tools/My Data/Import Route from File

GPX file format and Garmin
Garmin has a bunch of extensions to the gpx standard that are worth knowing about. If you format your data this way you can get custom POIs with phone numbers that can be dialed, custom proximity alerts, and so on.

POI display
Custom POI are never displayed in 3D map view on the Nuvi 770. To see custom POI, you have to be in a 2D view (like "track up") and you have to be zoomed in pretty close (300 ft). Built-in POI are only displayed in map browse view (that might not be the right term, but the view you get when you tap the map and it shows a flat 2D view that doesn't move). Favorites are always displayed, but there is a limit to how many you can have. My wife's StreetPilot has an option to always display icons for custom POI - I miss that option on my Nuvi. It is always fun to drive into Boston and see the approaching cloud of Starbucks.

Google Maps Routes to GPX (with turn by turn routing)
Use this simple javascript tool. This is extremely handy for creating custom routes since you can drag Google Maps routes around until they follow the roads you want. For scenic routes or motorcycle routes or the like, this is your friend.

Google Maps Result Points As Waypoints (favorites, not custom POI)
Use the Google Maps "Send To" function. This requires that you have the Garmin Communicator installed. There is a description of the whole procedure on the Garmin site and a video on YouTube.

Creating custom POI with Google Maps
One of the best ways to create custom POIs is with Google Maps using the My Maps functionality. Suppose that you have a hankering to try out some diners, so you go to Google Maps, center it on your home, and type "diner" into the search box. You pick interesting looking diners from the results, click on the flag on the map and click the "Save to My Maps" link. You might end up with something like this.

The Automated Method

You can automate a lot of the work with TakeItWithMe, which I only discovered after I figured out the whole process. Even using this tool, you may want to clean up some of the text, but it does a decent job.

The Manual Method (for the curious)

Notice that the My Maps page has a link to "View in Google Earth". This is a link to a KML file, but unfortunately it is an indirect KML file. Right-click on this and save a local copy. When you open it in a text editor you will see a line something like:
<link>
<href>http://maps.google.com/maps/ms?hl=en&amp;ie=UTF8&amp;oe=UTF8&amp;msa=0&amp;msid=102437472669334442804.00044ee9bbc7ace6c7aeb&amp;output=kml</href>
</link>
This is a link to the real KML file, but the link is XML encoded, so you have to clean it up by changing "&amp;" to "&" and so on. You can use the cleaned up URL to get the real KML file.

There are a number of tools that can convert that KML file into something useful. I convert it to a csv file so that I can do some clean up of the text. The combination of a spreadsheet (e.g. Excel) and a text editor that does regular expression search and replace (e.g. jEdit) will make quick work of whatever changes you need to make. Keep in mind that you usually have to swap longitude/latitude columns because the Garmin POILoader expects them in a specific order. In this exercise you are aiming for something that looks like:
-71.6122820,42.2680170,"South Street Diner","40 South St, Westborough, MA 01581 : (508) 870-0101"
-71.4331050,42.2753870,"Lloyd's Diner","184 Fountain St, Framingham, MA 01702 : (508) 879-8750"
-71.1579900,42.3710520,"Deluxe Town Diner","627 Mount Auburn St, Watertown, MA 02472 : (617) 926-8400"
-71.0761800,42.3370780,"Mike's City Diner","1714 Washington St, Boston, MA 02118 : (617) 267-9393"
-71.0576100,42.3496970,"Boston Diner Inc","178 Kneeland St, Boston, MA 02111 : (617) 350-0028"
-71.7924040,42.2603340,"Kenmore Diner","250 Franklin St, Worcester, MA 01604 : (508) 792-5125"
-71.2841490,41.9438210,"Morin's Diner","16 S Main St, Attleboro, MA 02703 : (508) 222-9875"
-71.4843220,41.9914170,"Patriots Diner","65 Founders Dr, Woonsocket, RI 02895 : (401) 765-6900"
-71.5169070,42.1428030,"Ted's Diner","64 Main St, Milford, MA 01757 : (508) 634-1467"
-71.1780010,42.2340240,"50'S Diner","5 Commercial Cir # 101, Dedham, MA 02026 : (781) 326-1955"
-71.0669330,42.3269120,"Victoria Diner","1024 Massachusetts Ave, Boston, MA 02118 : (617) 442-5965"
-71.2310410,42.3770640,"Wilson's Diner Inc","507 Main St, Waltham, MA 02452 : (781) 899-0760"
-71.1220090,42.3907700,"Andy's Diner","2030 Massachusetts Ave., Cambridge, MA 02140 : (617) 497-1444"
-71.8648150,42.1164820,"Carl's Oxford Diner","291 Main St, Oxford, MA 01540 : (508) 987-8770"
-71.4258650,41.8242380,"Haven Brothers Diner","72 Spruce St, Providence, RI 02903 : (401) 861-7777"
-70.9469680,41.8953630,"Dave's Diner","390 W Grove St, Middleboro, MA 02346 : (508) 923-4755"
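
If you would rather script this step than fiddle in a spreadsheet, here is a rough sketch of the same conversion in Java. It assumes the simple flat Placemark structure that My Maps KML uses, and it does no escaping of quotes or commas in the text, so real data may still need clean up:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class KmlToPoiCsv {
    public static void main(String[] args) throws Exception {
        // args[0] is the path to the saved KML file
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(args[0]));
        NodeList placemarks = doc.getElementsByTagName("Placemark");
        for (int i = 0; i < placemarks.getLength(); i++) {
            Element pm = (Element) placemarks.item(i);
            String name = text(pm, "name");
            String desc = text(pm, "description");
            // KML coordinates are lon,lat(,alt) - already the order
            // used in the sample above, so no swapping needed here.
            String[] coords = text(pm, "coordinates").trim().split(",");
            System.out.println(coords[0] + "," + coords[1]
                    + ",\"" + name + "\",\"" + desc + "\"");
        }
    }

    private static String text(Element parent, String tag) {
        NodeList list = parent.getElementsByTagName(tag);
        return list.getLength() == 0 ? "" : list.item(0).getTextContent().trim();
    }
}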

Now just create a custom icon (if you want) and a custom alert tone (if you want) and load it up with your other custom POIs.

Hacking Sites Using Google API
There are lots of sites these days that are using bits of the Google Maps API. They often have a KML file at the heart of the map they are displaying and you can get at that by looking at the page source. Here is an example from roadfooddigest.com. If you look at the source for that page, you will see a reference to this xml (kml) file.

You can obviously directly use this KML file to create POI if you want to, following the manual method I just described.

But you can also do more cool stuff with Google Maps. Copy the URL for the KML file onto the clipboard, open up Google Maps, and paste the link into the search bar. Go ahead, I'll wait. Neat, huh? Believe it or not, this works with Google Maps Mobile on the blackberry as well.

You can also create a My Map based on KML data (create a new map, click the import link, and paste the KML URL or browse to the KML file).

David

Wednesday, May 28, 2008

The Dark Side of REST?

My organization is considering a move from a SOAP based SOA to a RESTful SOA. To be sure there are advantages and disadvantages to both, but I just encountered a problem with REST that I hadn't really pondered before.

I have used RESTful APIs before, but only to play with. I have messed with the Google Docs API using both REST (from Java and J2ME) and the Java libraries and done some other playing around. But I have never used a RESTful API in anger before. I have never really used one to solve a problem.

Well, I just did. And I found the experience unexpectedly enlightening.

The problem is a typical one for a geek like me. I recently got a book, Hamburger America, that has detailed reviews of 100 or so of the best hamburger places in America. Being a GPS owning geek and foodie I of course wanted to turn that into a custom POI file for my GPS so that I would always know when a truly great hamburger might be nearby.

I wanted to make sure I had a workable process before I spent a bunch of time typing in 100 addresses. So I grabbed the addresses of a bunch of highly rated burger places in the NE from the Phantom Gourmet and sent them to a couple of batch geocoding sites.

Alas, for whatever reason I got zilch. The stupid batch web geocoding apps don't do much in the way of user friendly error reporting. But they all use the Yahoo Maps API, so I figured I would simply write a little app myself if I thought it could be done in under 15 minutes.

10 minutes later I had a working Java Swing app that does batch conversions of addresses to latitude/longitude. That says a lot about the simplicity and power of REST, doesn't it? (It also says a lot about how easy it is to use Netbeans 6, but that's another post).

But the code I wrote was awful. Really horrid stuff. I hard-coded everything that I shouldn't have and I did the parsing without XML libraries, using really basic String matching. It was easy and fast and it works great, but the tiniest change to the API's response and the whole thing will fall over.
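
To show the shape of the sin, this is roughly what that kind of code looks like (a reconstruction, not my actual app - the endpoint and tag names here are made up):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class NaiveGeocoder {
    // Exactly the wrong way: hard-coded URL, no XML parser, blind string
    // matching against tags that the service is free to change tomorrow.
    public static String[] geocode(String address) throws Exception {
        URL url = new URL("http://example.com/geocode?appid=MYID&location="
                + URLEncoder.encode(address, "UTF-8"));
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) sb.append(line);
        in.close();
        String xml = sb.toString();
        // Breaks the instant the service renames, reorders, or nests these tags.
        return new String[] { between(xml, "<Latitude>", "</Latitude>"),
                              between(xml, "<Longitude>", "</Longitude>") };
    }

    private static String between(String s, String open, String close) {
        int start = s.indexOf(open) + open.length();
        return s.substring(start, s.indexOf(close, start));
    }
}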

Anyway, if you haven't guessed, the observation I made was this: it is extremely easy, to the point of being by far the path of least resistance, to write bad client code for a RESTful service. It is much easier to write bad code against a RESTful API than against a SOAP API (because the SOAP library, which is all but required, will do the heavy lifting for you).

We have our share of bad code lying about - we don't really need to be encouraging anyone to write more of the stuff. What can we do to prevent or discourage bad coding practices for RESTful APIs?

David

Monday, April 28, 2008

Why I Hate Livejournal

(cross-posted to my Livejournal page)

Pretty inflammatory subject for a first post, isn't it? They'll probably ban me.

But I absolutely HATE that I can't get RSS feeds from friends' private journals. Do the developers at Livejournal pay any attention at all to Web 2.0 trends?

Listen up people! If I can't read it in Google Reader, then it doesn't exist.

David

Monday, March 10, 2008

The Big Rip Is Over

Whew. I finished the big rip project. 121 gigabytes of Apple Lossless files (5215 tracks), piles of clean up and de-duplication, and a bunch of manual hunting for cover art, but it is done. Was it worth it? I'm not really sure. I had some hope of actually using some of the lossless music on my iPod, but 121 gigs? I think not.

The one advantage is that if I change my mind about the lossy format that I actually want to use (192kbps MP3 for now), I have everything tagged and organized so that it is pretty easy. Just the processor time to chug through it all again.

My one gripe is that I had to fight against iTunes in a number of places. Everything is possible, but not necessarily easy. It would be very nice, for example, if you could force imports to go to a different directory than the main iTunes directory. Or if you could display (and sort by) the file location. Or if you could use iTunes to move physical files around. Or if you could delete music from playlists. Or if when you do a conversion it asked you if you want to keep duplicates (like it does when you rip a disk). Or if you could have it touch every file in your library to find the missing files (and then if you could sort by the [!] or something). Or if you could swap back and forth between two or more different iTunes directories and then merge them at some point (I got to a place where I had to finish the entire process before I could sync my iPod or things would go horribly wrong - it would be nice if I could have done some of the fiddling in a separate environment).

On the other hand, some things iTunes got very right. Smart playlists are a godsend. The search function is almost magically good. And now that I have actually filled in my album art, cover flow is pretty neat.

And speaking of filling in album art - I never could have done that without amazon.com and google image search. Not a chance. The iTunes missing album art function only hit about 50-60% of my collection - that's what I get for being eclectic I guess.

One interesting thing about the whole process is that it took 24 hours running at ~75% CPU utilization on a modern dual core computer to transcode the files from Apple Lossless to MP3. That is an astonishing amount of CPU time on a computer that would have been classed as a supercomputer a few years ago and magic a few years before that. I estimate it used about 3E14 CPU cycles. A circa 1990 Cray Y-MP would have taken around 625 days to do this processing (obviously not counting any of the optimization that you definitely would have taken the time to do and using a 1:1 assumption about CPU cycles that probably isn't strictly accurate).

David

Friday, March 7, 2008

sshfs rocks!

I have no idea how I went for so long without knowing about sshfs, but for the record it rocks. It is a *nix user space mountable file system that works over ssh as a proxy for sftp. Or in English, it is a way to mount anything that is available via ssh (sftp) as though it were a disk on unix/linux.

Yes, it works on macs. No, it doesn't work in cygwin. Yes, there is a windows client, if you don't mind paying for it (I haven't tried this).

Sshfs, plus an ssh terminal, plus a tiny bit of port redirection, and hey, who needs a vpn? The completely brilliant thing is that you never need anything more than sshd running on your "server" side and you never need anything but port 22 open. Ever.
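
For the record, basic usage looks like this (host and paths are placeholders):

sshfs david@server.example.com:/home/david ~/mnt/server
# ...use ~/mnt/server like any local directory, then unmount when done:
fusermount -u ~/mnt/server
# (on a mac, unmount with: umount ~/mnt/server)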

David

Friday, February 29, 2008

The Big Rip

In the wake of the realization that my music collection has gone in the wrong direction, quality-wise, I have decided to re-rip my entire CD library in Apple Lossless. I expect this to keep me busy for the foreseeable future.

Why apple format instead of flac? Because iTunes supports it natively (and it is a nice ripping environment) and the format is playable in Linux and convertible to other formats. The thing that sucks about any format is that there is no one perfect choice. MP3 is the only real generic choice for lossy, but there is no similarly universal format for lossless. So I will need a minimum of two copies of my entire library to support my existing audio needs. Possibly three. Sigh.

I don't actually expect the ripping to be the really painful part of the process. I expect incorporating the lossless items into iTunes without losing anything or duplicating anything and without hopelessly wrecking my playlists to be the hard part.

The way my luck is going lately, my hard drive will probably crash halfway through the process. I should get a Drobo, but I want a Mac to attach it to. Or I could take a chance and attach it to linux (formatted hfs+), I suppose...

48 disks down, lots and lots to go.

David

Monday, January 28, 2008

Google Sets, Text Mining, and Enterprise 2.0

I was browsing on Google Labs as I do from time to time to see what the alpha-geeks are up to.

I had never looked at Google Sets before and when I looked at it this time I almost immediately dismissed it as useless. I mean who cares if I can create sets of things? But then I had the idea to type in some bands that I like to see what happened (the starting set was peaches, shiny toy guns, dresden dolls, goldfrapp, and the knife).

It instantly popped up with a long list of bands many of which I know and like and some that I have never heard of. It's the ones that I had never heard of that got me thinking.

This is exactly the kind of problem that pharmaceutical scientists are trying to solve every day. They have a bunch of things that they know are related and they want to find the other things that are related that they don't know about. But the text mining tools that they use to do it are very expensive and painful to use.

This set interface is so simple. So intuitive.

I imagine that the algorithm that Google Sets is using is some kind of basic co-occurrence test, so there are lots of tools out there that are more sophisticated. On the other hand I didn't get any hits for sharpening stones, so it has to be at least a little more than that.
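
For reference, a basic co-occurrence test is not much more than counting. Here is a sketch (documents reduced to bags of terms, which is a big simplification of what Google actually does):

import java.util.*;

public class CooccurrenceExpander {
    // Score candidate terms by how many documents they share with the seed set.
    public static List<String> expand(Set<String> seeds,
                                      List<Set<String>> documents) {
        final Map<String, Integer> scores = new HashMap<String, Integer>();
        for (Set<String> doc : documents) {
            boolean hasSeed = false;
            for (String seed : seeds) {
                if (doc.contains(seed)) { hasSeed = true; break; }
            }
            if (!hasSeed) continue;
            for (String term : doc) {
                if (seeds.contains(term)) continue;
                Integer n = scores.get(term);
                scores.put(term, n == null ? 1 : n + 1);
            }
        }
        // Rank candidates by co-occurrence count, highest first.
        List<String> ranked = new ArrayList<String>(scores.keySet());
        Collections.sort(ranked, new Comparator<String>() {
            public int compare(String a, String b) {
                return scores.get(b) - scores.get(a);
            }
        });
        return ranked;
    }
}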

If everything inside the pharmaceutical firewall (or better inside+outside) could be indexed into a tool like this would it be useful? Yes, I think so.

It seems like a big problem, but it is a tiny problem compared with the one that Google has apparently already solved.

David

Wednesday, January 23, 2008

Change my clipboard?

Despite the fact that Jeff Atwood has some cool stuff on his keychain (and therefore obvious geek credibility), I am not entirely certain that I agree with his recent clipboard post.

His thesis is basically that it is about time to add some more functionality to the clipboard because, well, it is about time.

I am not so sure though. I would definitely want any new clipboard functionality to be very well thought out. Sometimes the power of an idea (clipboard, unix pipes) comes from its simplicity.

More often than not I find myself desperately wishing that I could remove functionality from my Windows clipboard. It used to be that every application supported a "paste special" function that was always available and worked smoothly, but that is no longer the case. I guess people found it confusing or something. Even applications that support "paste special" often have it greyed out at times when it would make sense for it to be available, or it has no good keyboard shortcut (in MS Word to paste plain text it is "alt-e, s, up-arrow, enter" - not exactly convenient. There are options if you want to go to some effort).

More than 99% of my copy/paste operations between applications are either pure unformatted text or images. To get the text functionality that I need I have to run PureText or keep a notepad window open, so that I can paste text there and re-copy it sans formatting. I would be very happy if only unformatted text and images worked between applications.
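
Incidentally, the PureText trick is tiny to reproduce in code - read the clipboard's plain-text flavor and write it straight back, which discards the rich formats. Something like this in Java:

import java.awt.Toolkit;
import java.awt.datatransfer.Clipboard;
import java.awt.datatransfer.DataFlavor;
import java.awt.datatransfer.StringSelection;

public class StripFormatting {
    public static void main(String[] args) throws Exception {
        Clipboard cb = Toolkit.getDefaultToolkit().getSystemClipboard();
        // Ask for the plain-text flavor; rich flavors (RTF, HTML) are ignored.
        // Throws if the clipboard holds no text at all.
        String text = (String) cb.getData(DataFlavor.stringFlavor);
        // Writing a StringSelection back replaces all flavors with plain text.
        cb.setContents(new StringSelection(text), null);
    }
}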

I don't want it to be more complex. I want it to be simpler.

On the other hand I can't find anything wrong with the idea of adding ClipX functionality. But I can already do that by installing ClipX (and I probably will if it doesn't conflict with anything else I run). Why change the operating system?

David

Sunday, January 13, 2008

Hiring Programmers

This is a great list of things to look for when hiring a computer programmer.

This, of course, should not be confused with a list of things to look for in a new hire in general. You have to add all of those on top (or you'll be mighty sorry).

These are the things I look for now when I interview someone, but I have never been this explicit about it. Maybe I'll make a checklist.

David

(here is an old related post)

Monday, September 24, 2007

Printer Economics

My venerable HP Deskjet 500 has finally died. I have had it since the early 90's I think. It was only the second printer I ever owned after my old Epson LX-80 dot matrix printer (which would probably still work just fine, actually, but whose output was no longer acceptable - even in the bad old days of the early 90's).

By died I don't mean that there is anything major wrong with it. The thing is a tank (like all things HP used to be). By died, I mean that little things have started to go wrong and it is most likely not worth the trouble and expense to fix it. The sheet lifter doesn't always work on the first try and for some reason the ink cartridges are clogging with depressing regularity.

I have two other printers. An Epson Stylus Photo R2400, which is completely brilliant for photography, but isn't really a general purpose printer, and some generic POS from Lexmark that I got for free when I bought my last computer, which is, umm, temperamental and very slow for basic printing.

The kind of stuff that I tend to print these days is very different from when I bought my old Deskjet. I want to print google maps and web content. Things that are mixed color and black and white. I want the color bits to look reasonable and the black print to be perfect. The only kind of printer that really meets this need is a color laser printer.

So I started looking at low end color laser printers to see if I could afford them yet, and lo and behold, I can. I settled on the HP Color Laserjet 2600n, which when I ordered it had a $100 instant rebate in effect, making the total $299.99. Less than half of what I paid for my Deskjet 500 way back when.

The cartridges that come with this printer are full new cartridges (not the crippled "teaser" cartridges that some manufacturers try to slip you) that should print around 2500 pages (mixed duty) before they need to be replaced.

Now here is where it gets funny. A set of replacement cartridges costs $323.96. That's right, more than the printer. So this printer is disposable. I can't believe that I live in a world where color laser printers are disposable.

The total cost for this printer works out to about 12 cents per page ($299.99 over the roughly 2,500 pages the included cartridges will print), which isn't bad.

I feel really sad for the environment, though. This kind of economic model is criminally stupid. Is there anything that isn't disposable anymore?

David