Wednesday, November 17, 2010
"That being the case, I think everybody who gets on a flight wants to ensure and be assured that everybody else around them has been properly screened and, oh, by the way, everybody else on that flight wants to make sure that I have been properly screened or you have been properly screened."
I don't actually think so highly of my safety that I feel the need to be sure that my fellow passengers are either molested or digitally strip-searched. We're talking about hurtling through the sky in an aluminum tube here. There's a lot that can go wrong, resulting in my fiery death, and terrorists are actually pretty low on the list.
You want me to feel safe in the tube? Show me the pilot's credentials and record. Show me the plane's maintenance records, and your procedures for screening and monitoring mechanics. I'm far more interested in whether the engines are going to fail than whether some fuckwit has a bomb in his panties. Passengers know enough now to watch for and try to stop the latter; we've got a fighting chance at all stages (though, more of a chance if we're not lulled into a false sense of security). With the former, we can only watch the pretty sparks. But I guess terrorists make a bigger splash on the news than pilot error or mechanical failure, so that's what the TSA is worried about.
I'm not against taking reasonable precautions, but just remember there is no such thing as "safe". Just more or less likely to die. We can get the best pilot in the world in a brand new plane with no passengers at all, and it can still go down from a bird strike. All we can do is raise the bar and ask ourselves whether a given procedure makes us safer in proportion to the cost to our comfort, dignity, sanity, time, wallet, and honor.
That last bit, to me, is what this is really all about. Bickering about whether digital strip searches and grabbing people's crotches makes us "safer" or not is beside the point: This is a dishonorable way to try to make ourselves safer, full stop.
(Finally: Remember, folks, complaining about the TSA on Twitter and your blog does nothing. Worse than nothing if it satisfies your anger and prevents you taking further steps to make things right. I've written to my Congresscritters on this subject. Have you?)
Saturday, August 7, 2010
For me, it was true in a second way: that local repulsor model describes the control scheme I used for multi-robot formation control, based on Leonard's work at Princeton. (That is, the robots were all attracted to each other from a distance, but were repulsed when they got too close)
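Out of nostalgia, here's roughly what that scheme looks like in code. This is a toy sketch only -- the spring-like force law, the gains, and the setup are my own illustrative choices, not the actual controller from that work:

```python
import numpy as np

def formation_step(positions, d0=2.0, k=0.5, dt=0.1):
    """One step of a toy attract/repel formation controller.

    Each robot is pulled toward any neighbor farther than d0 away,
    and pushed away from any neighbor closer than d0.
    """
    forces = np.zeros_like(positions)
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = positions[j] - positions[i]
            dist = np.linalg.norm(diff)
            # Spring-like force: attractive beyond d0, repulsive within it.
            forces[i] += k * (dist - d0) * (diff / dist)
    return positions + dt * forces

# Three robots on a line: a crowded pair and a distant straggler.
pts = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 0.0]])
for _ in range(200):
    pts = formation_step(pts)
```

After a couple hundred steps the crowded pair spreads out and the straggler gets pulled in, with the group relaxing toward an even spacing -- the formation emerges from purely local pairwise forces, which is the whole appeal of the approach.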
I am highly amused.
Friday, August 6, 2010
Saturday, July 31, 2010
So I broke down and bought the Livescribe Echo, which I mentioned on my other blog. I’m really impressed. It does a lot of things right, particularly its choice of demo apps. It’s the first piece of technology I’ve owned in a long time where I just don’t know how it works.
Well, that’s a little bit of an overstatement. The pen has a camera pointing down the barrel of the ink cartridge, and when you press down, it looks at the pattern of dots and determines from that where it’s looking: which spot, on which page, in which pre-saved notebook. It only records what’s written when it’s on: I tried writing a bit with the pen (with power off), then turning the power on and drawing a line through it, and it only recorded the line. I suspect from that, and from looking at how it picks up the lighter strokes in my handwriting, that it’s not actually recording the sight of a line being laid down, but the position of the pen relative to the dots while pressure is on. (Which means that I really need to press down more firmly while writing!) But I don’t know what it is about the dot pattern that makes it recognize where it is so well.
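I can't resist sketching the core trick, though. The real dot pattern the pen reads is two-dimensional and proprietary, but the underlying idea can be shown in one dimension: print a code sequence in which every fixed-size window is unique, and then any small patch the camera sees pins down an absolute position. A toy version (the sequence construction is the standard De Bruijn algorithm; everything else is my own illustration, not how the pen actually works):

```python
def de_bruijn(k, n):
    """Cyclic sequence over the alphabet {0..k-1} in which every
    length-n window occurs exactly once (standard algorithm)."""
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

K = 8
strip = de_bruijn(2, K)  # 256 "dots"; every 8-dot window is unique
lookup = {tuple(strip[i:i + K]): i for i in range(len(strip) - K + 1)}

def locate(patch):
    """Absolute position of an 8-dot patch on the strip -- the 1-D
    analogue of the pen working out where it is from a tiny glimpse."""
    return lookup[tuple(patch)]
```

So `locate(strip[100:108])` returns 100: seeing just eight "dots" is enough to know exactly where on the "page" you are, with no landmarks and no history.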
As for usage:
I doubt I’m often going to use the recording feature: lectures and meetings, most often, so maybe once or twice a week, depending on whether my coworkers are leery of it. It would be nice to bring to Viable Paradise in the fall, but I’m not sure whether recording devices are allowed. I’ll ask at some point. (Did I mention I got in? I got in! It gives me hope that I might actually manage to publish something, and thereby become an author instead of merely a liar!)
I’ve also found that the handwriting recognition does a very poor job with my exceedingly poor handwriting. I can’t decide whether editing the results of OCR would be better than simply retyping what I write. So, I probably won’t be jotting down blog posts or anything.
However, even without those features, I really like the pen: I constantly lose notebooks (and pens, actually...) and the idea of having my writing backed up greatly appeals to me. I had been taking photos of my notebook and storing them in Evernote, but it’s a cumbersome process and I have issues with proper rotation.
I’m also intrigued by its SDK, the ability to create my own applications and my own paper. It would be pretty cool to be able to print up maps that it recognizes, and then plot out character movements from room to room, then go back and ask it where certain characters were at particular times.
Wednesday, July 28, 2010
For that matter, does anyone have any packing-lunch suggestions? I'd really like to be able to pack a week's worth in advance, as I tend to be only semi-functional in the morning...
Thursday, July 15, 2010
I’ve been thinking a lot about high-profile cases like the Mehserle trial. People are really upset over that one. I don’t claim to be an expert on it, having not watched the trial, nor even learned about it until the jury was in deliberation. But one thing that strikes me is that the folks who are angry don’t seem to really have trusted the jury.
Well, that leads me to ask: who do Americans trust? Celebrities. We need celebrity juries.
This has been done before, to great effect, in the investigation into the Challenger explosion. (Remember Richard Feynman with the O-rings?) People trusted that result; it worked. I don’t know if the members of that committee really were experts, or if it even mattered.
Here’s what I propose: Go through the list of people applying for Dancing with the Stars or similar reality shows. (That, or hang around the back lots of Hollywood studios looking for child actors rummaging the dumpsters.) Offer these people a hot meal and some amount of legal training, and then let them continue their publicity-hungry ways.
Then, when there’s a major trial, call them in. Use focus groups as part of jury selection: go out and pick up the angriest-looking people picketing, and anyone they ask for an autograph is in. Obviously, a lot of them will be disqualified for drug or drunk-driving offenses, but there should be a sizable pool left over of celebs who never got caught.
Things go on normally from there. Celebrities may not be the brightest people, but they’re certainly no dumber than your average person. And they have enormous egos -- they’re less likely to be awed by police officers or expert witnesses.
Then when the trial is over and the jury comes back from deliberations, you trot them all out in front of the cameras. They smile and wave, and deliver the verdict. It may very well be the same one that an ordinary jury would bring back, but people will trust it more if it comes from celebrities.
Now, I’m not only saying this because I think that people are stupid. The point of the jury system in the first place was that a person should be tried by their peers -- not only for their own sake, but so that the community would trust that the result was the same as if they had personally been there. It’s not merely or even primarily about fairness, but about confidence. I don’t think that we have that anymore; our cities and communities are just too big, and we don’t know each other. But we do feel like we know celebrities, probably more than we feel like we know our neighbors in many cases. Ergo, in order for the public to have confidence in the results of jury trials, those juries need to be filled with what passes for our neighbors today. Sad, but possibly necessary. Maybe we can get away with just having celebrities as jury foremen.
(I was going to insert a Paris Hilton joke here, but I’m reminded that she’s actually the only person in her generation of her family to *make* money rather than just spend it. The jury, if you’ll pardon the pun, is still out on her, I think)
Thursday, July 8, 2010
Just a friendly reminder that I’ve moved discussion of my writing over yonder. Recent posts include discussions of the outlining process, an experiment with ‘interviewing’ my characters, and some of the lessons I’ve learned from the Official CIA Manual of Trickery and Deception.
Wednesday, July 7, 2010
I was recently reading a blog that I follow, written by someone I respect. (I’ll refrain from naming him here, as his identity isn’t actually germane) I frequently do not read the comments on this blog: I don’t have a lot of time, and frankly, I go there to read what he has to say, not his commenters. (Dangerous to say, given that I’m curious what my readers think, but I tend to think of Internet comment threads as a pox on civilization)
This morning, though, the post was specifically about the response to an earlier post, so I skimmed the comments a bit. In them, someone asked a (I thought) relevant question about a somewhat unwise use of slang in a quote from an older post that was relevant to this one. The blogger responded, answering the question, but then taking the commenter to task, saying “Forgive my annoyance, but this was explained many times in the comments for that post. It's just a matter of scrolling down and reading the conversation, if you are confused.”
This struck me at first as patently unfair. In the post being referred to, there are pages of comments, more text than the original post. And frankly, I find that many comment threads put me off my feed: there’s a lot of stupidity out there on them thar interwebs. It’s usually safer not to read them. However, this is someone whose courtesy and thoughtfulness I have rarely had cause to question.
So now I’m wondering: what’s the etiquette here? To what extent is it reasonable to expect someone to read through a comment thread before asking a question? To what extent is it OK to chide someone for not doing so, versus simply not answering the question, or answering the question without further comment?
Tuesday, June 29, 2010
I seem to have accumulated a couple of interesting links, mostly about food, recently. Allow me to share them.
* The Ka-Pow bar! This is awesome. So, chocolate bars are made by grinding cacao beans, extracting cocoa butter, then remixing them in a different proportion, with added sugar, milk solids, vanilla, almonds, whatever. These geniuses substituted finely-ground COFFEE for the cocoa in the remix stage. I am in awe. But it’s been 80 degrees in the shade here, and there’s no way I’m going to have them shipped to me from OR right now. So, I will either wait, or track down a source of food-grade cocoa butter for my own nefarious purposes.
* Making perfect french fries. Basically it comes down to using acidulated water to strengthen the pectin in the potato, holding it together better during cooking. Also useful for potato salad, actually. They do a similar investigation with potato chips (you may call them “crisps” if you prefer being wrong)
You’d think that after reading all that about french fries and seeing electron microscope pictures of french fries and even seeing some dude *sand* a french fry, that I’d know how to make the perfect fries. Sadly, I don’t. But that’s OK, I need to be doing less deep-frying.
But it’s not all about food!
* Abandonware by An Owomoyela is an awesome spec-fic short story up on Fantasy Magazine’s site
* Speaking of fiction, Chuck Wendig, esq posted Part One of Codpiece Johnson and the Hamsters of Anamnesis, part of an ongoing saga of being careful what you say online.
* Hoist Sail for the Heliopause and Home, interactive fiction by Andrew Plotkin. I haven’t actually played more than a minute or two of this one, but it looks fascinating.
* NYTimes article on pot shops in Colorado. This subject is interesting to me: I don’t really have any interest in the drug itself, but I do think that a looser set of restrictions would do a lot of good for US society. The country’s various stabs at Prohibition have been uniformly bad for us, and this time our country’s drug habit is in the process of destroying Mexico. Finding another solution seems incumbent upon us.
* This is an older Times article on the various custom-order items available on the internet, including custom-tailored shirts. I’m really hoping to be able to buy shoes this way. I can walk into a shoe store, state my size, and be presented with at most two pairs of shoes that fit me. There are those reading this who are snorting at my broad spectrum of choice compared to the wasteland that shoe stores are to them. Vans is among a number of stores that come close, but they don’t let you specify width! The “customization” is all about selecting color. I suspect that the answer is likely to be a machine that makes them on demand.
* On a related note, I keep trying to remind myself to visit the Harvard Book Store’s books on demand machine.
Monday, June 28, 2010
I admit that many of my food choice decisions are made on the basis of “Awesome” as opposed to “Good”. Friendly’s has a cheeseburger now where the bun is a pair of grilled cheese sandwiches. So awesome. I should have known better than to look up the nutrition information *sigh* (1500 calories, not including the side of fries)
On the other hand, if I skipped lunch and split it with someone else, it might actually be reasonable. Or, if I didn’t eat anything else all day except coffee and celery...
No, no. I must focus. The Double Down, by contrast, looks practically like a salad with its 540 calories!
Sunday, June 27, 2010
Now, though, I don't use much of it. A lot of statistics and some discrete math, but not much else. I'd like to keep in practice, but it's tough to find a good source of interesting puzzles to play with. Googling mostly turns up stuff aimed at kids: the "make math fun!" dreck that the Lament laments. I have all my old math textbooks (especially discrete math). Anyone have any suggestions? (feel free to pass along the link to this post to anyone who might)
Thursday, June 17, 2010
L. and I went to the farmer’s market this evening, and there was a stand selling seafood -- the proprietor drives down to the coast in the morning, then drives back with a haul of fresh cod, scallops, and lobsters. I bought a 1lb bag of scallops, which smelled absolutely delicious: sweet, really, and very faintly of the sea. I grilled them on skewers with just a brushing of canola oil and a sprinkling of salt. Fantastic. That plus a loaf of garlic bread, a bunch of carrots, and a jar of pickled beets, makes for a nice haul.
We’ve been impressed by how well the market is doing this year. The samosa stand in particular is just doing phenomenally well. Thanks to the warm weather, all the stands already have lettuce and other greens. Hell, we’re already a couple weeks into a very early strawberry season!
Looks like we’ll be eating very well this summer...
Wednesday, June 16, 2010
This chess set may be the most awesome thing you see all day. It’s pretty remarkable, but at the same time well within reach of a dedicated group of students armed with some good reference books. (... and $30k worth of legos and computers) It would not surprise me if they said that building the robots took more time than programming them.
The biggest surprise to me, actually, is that they were able to use Bluetooth to control the robots. My experience with Bluetooth in Lego Mindstorms was mostly negative: short range, few control channels. My guess is that they must have the robots listening on shared channels, and prefacing commands with some kind of ID string: that would also explain why so many of the movements are sequential.
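If I'm right, the receiving side could be as simple as this sketch. The robot IDs and message format here are made up for illustration -- this is my guess at the scheme, not Team Hassenplug's actual protocol:

```python
def make_robot(robot_id):
    """Build a handler for one robot listening on a shared channel."""
    log = []
    def handle(message):
        # Every robot hears every broadcast; act only if the
        # ID prefix matches ours.
        target, _, command = message.partition(":")
        if target == robot_id:
            log.append(command)
    return handle, log

handle_wk, wk_log = make_robot("WKNIGHT1")
handle_wq, wq_log = make_robot("WQUEEN")

# Simulate three broadcasts on the shared channel.
for msg in ["WKNIGHT1:move e4", "WQUEEN:capture d5", "WKNIGHT1:home"]:
    handle_wk(msg)
    handle_wq(msg)
```

Since every robot has to receive and filter every message, only one command is really "live" at a time -- which would neatly explain why so many of the movements in the video look sequential.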
Post-match battery charging must be a royal pain in the ass. I also wonder what kind of corrections are needed during a long match: Sensor error and actuator slip accumulate, so that over time, as a robot moves, its internal position can get wildly out of sync with its actual position and orientation. Even with a more sophisticated sensor suite it’s not a trivial task. It must be very difficult with the equipment they appear to have.
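Here's a toy dead-reckoning example of the kind of drift I mean: a robot that thinks it's tracing perfect squares, but whose every 90-degree turn is off by a small bias. All the numbers are invented; the point is just that tiny per-turn errors compound:

```python
import math

def drive_square(turn_error_deg, laps=5, side=1.0):
    """Dead-reckon a robot driving square laps with a biased turn.

    Returns the final (x, y) position; with no error it should end
    exactly where it started.
    """
    x = y = 0.0
    heading = 0.0
    for _ in range(4 * laps):
        x += side * math.cos(math.radians(heading))
        y += side * math.sin(math.radians(heading))
        heading += 90.0 + turn_error_deg  # each turn is slightly off
    return x, y

# Perfect turns bring the robot back to the origin...
px, py = drive_square(turn_error_deg=0.0)
# ...but a mere 1-degree bias per turn leaves it well away from it.
bx, by = drive_square(turn_error_deg=1.0)
```

Twenty perfect turns land the robot exactly back home; the one-degree bias leaves it a noticeable fraction of a square-side away, and the error only grows with match length -- hence my curiosity about how they re-localize.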
Anyway: my hat is off to Team Hassenplug. That’s pretty damn cool.
Tuesday, June 8, 2010
I am somewhat in awe of the Mac transfer program: all of my programs and settings were transferred perfectly, even my ssh keys and volume settings. I’m not entirely happy that I had to sit and wait three hours to get to use it, but it’s like my old machine was transformed into a newer, faster one.
Sunday, June 6, 2010
I just ordered a new Mac laptop to replace one that is plainly on its way out. (It gave good service, but its time is nearly up)
I am told that it is easy to transfer my files and applications to another Mac, but having not done so I’m wondering if any of you have done so: did you have any problems? Was there anything you wished you’d done beforehand? What did you do with the old laptop?
I’d been getting increasingly frustrated with my (1st-gen) Kindle: half the time I’d pick it up, and it would be dead, frozen. I’d turn on the radio, and it would freeze. I’d written it off for a while, and then realized that all of these problems were at least loosely consistent with a dying battery. One $20 replacement later, and it works like a charm!
To celebrate, I finally picked up a book that’s been recommended to me a dozen times: H. Beam Piper’s “Little Fuzzy”. They were right to recommend it, it’s really good!
Saturday, June 5, 2010
Monday, May 31, 2010
Found a fun little semi-addictive game this afternoon, a variant on Mine Sweeper. (Linked to an English page at Play This Thing; the actual game is on a Japanese page, but is easy to play anyway) Instead of mines, you’re uncovering monsters of differing levels, and the objective is to not just find them, but to clear the board. There are monsters of level 1, 2, 3, and 4, and the numbers indicate the total levels of the surrounding monsters. You start off at level 1, and can click on level 1 monsters (blue blobby things) with impunity. Uncovering a monster above your level does damage to you (though you do get experience!) and so you have some ability to click around in the beginning.
If you have trouble (as I do) keeping track of your logical deductions, you can hover over covered squares and press the A and D keys to cycle through the different levels of potential monster. (They don’t bother with level 1, because you can always uncover those)
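The clue rule, at least, is simple enough to sketch: each square's number is the sum of the levels of the monsters in the eight squares around it. The board and the code below are my own toy reconstruction, not the game's actual internals:

```python
def clue_numbers(board):
    """Compute the clue shown on each square: the total level of all
    monsters in the eight surrounding squares (0 = no monster)."""
    rows, cols = len(board), len(board[0])
    clues = []
    for r in range(rows):
        row = []
        for c in range(cols):
            total = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += board[rr][cc]
            row.append(total)
        clues.append(row)
    return clues

# A made-up 3x3 board: monster levels, 0 for empty squares.
board = [
    [0, 1, 0],
    [2, 0, 0],
    [0, 0, 4],
]
clues = clue_numbers(board)
```

On this board the center square would show 7 (1 + 2 + 4 from its neighbors) -- exactly the kind of arithmetic you end up doing in your head while playing.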
Friday, May 28, 2010
Sunday, May 23, 2010
Phew! I managed to cut “The Body and the Bomb” down from 9,000 words to 7,600. It’s not quite the 6,000 that had been recommended to me in the excellent advice from the reader at Strange Horizons, but I think I’ve improved the pace considerably. I’m a little less sanguine about how well clued the story is: I think a number of readers are going to guess the ending midway through. I’ve gone through and removed some clues, muddied the waters a little, but it may still be too obvious. Still, I think it’s a solid, enjoyable piece, more so now.
So I think that my plan of action will be to send this draft as-is to the next two electronic markets. If neither of them takes it, I’ll shelve it for a month and try a more thorough hack job later.
Besides, I don’t want to spend too much time on that now, because I’ve got a nice shiny new short story in the works. The tentatively titled “A Stab in the Dark” looks to run about 5,000 words, and is a lot more in line with traditional murder mysteries with alibis to break and weird clues to frame correctly. It’s got a good solid plot, and I think it will be a lot of fun to read. My original plan for it revolved around what turned out to be a really clumsy, flimsy clue -- but I kept picking at it, and managed to shove that bit to the side in favor of something much more ironclad.
One of the things that I’ve really struggled with is how to make these actual science fiction mysteries: that is, I want the science fiction aspect to actually be important, not just “Sherlock Holmes in Space.” These should be stories that just can’t work in 1920s London or the modern day. Looking back, I think I’ve had the most success with those stories that would fall apart without the sci-fi element: space-borne telescopic arrays, robots as lethal instruments, pervasive sensor logs, home-built nuclear weapons, etc.
I don’t think it’s a coincidence that the stories that don’t really need that aspect (“Down Came a Blackbird” and “The Detective and the Detective”) are the ones that I’ve really stumbled on, the ones that have given me fits. But the ones that work, I’m pretty proud of. The writing is still pretty rough, and I’m sure I’m making plenty of amateur mistakes, but I’m happy with a lot of what I’ve written; it doesn’t make me nervous any more to show them to people.
Wednesday, May 19, 2010
Been finding lots of great stuff to read (or listen to) out on the interwebs lately, and just realized that I haven’t bothered to share a lot of it! Oops.
The first big find has been a trio of podcasts: Escape Pod, PodCastle, and PseudoPod. Each one is an audio short story podcast for science fiction, fantasy, and horror, respectively -- they’ve got good readers and good taste. (I know so, because I heard about them from someone who also has good taste)
Not enough? There’s plenty of good stuff online. All the links above will get you to some good fiction. If you’re looking for individual authors, have a look at Chuck Wendig (whose blog is also well worth reading), and Saladin Ahmed (I’ve been reading everything by him that I can get my hands on -- great stuff!)
And if that’s still not enough... I have some drafts that need editing? :)
Saturday, May 8, 2010
But articles like this one (repeating to some extent Stephen Hawking’s recent musing on the subject) that talk about hordes of aliens skipping from planet to planet stripping them of resources miss one important point: Invading an occupied planet for its resources is stupid, stupid, stupid. Here’s the thing: there’s almost nothing on this planet that cannot be found, in abundance, on the various moons, asteroids, comets, and other small bodies in our solar system. Water? All over the place. Hydrocarbons? Maybe, but there are smaller moons that look pretty good for that, at least for the simple ones. Gold, iron, platinum, uranium, tritium: there for the taking.
What do those sources not have? Angry defenders (to be brushed away like gnats, natch) and a big honking gravity well (Link found quickly and easily via DCKX!). Yup, everything stolen from us pathetic Earthlings has to be hauled uphill at considerable cost. Whereas to drive off aliens stealing our asteroids, we would have to develop ships to go attack them. Even if you were a total badass, if you’re that interested in resources, you’ve probably got accountants who will tell you the right answer.
So, the moral of the story is, if aliens ever rain death upon us, it won't be so they can steal our gold; it will be because they simply want to kill us. Possibly because they watched Glenn Beck.
Now, there remain potential invasion scenarios that do pop up from time to time. First possibility: capture a whole lot of semi-intelligent hominids to use as slaves, mining marshmallow peeps on Regulus V. (What? Where do YOU think they come from?) Second possibility: Settlement, either permanent or temporary. (Both of those say a lot about the conquering aliens, in a “why do you have this technology, but not this other one?” way that leads to interesting books) Now, Prof. Hawking could argue that these are highly likely to be the case, but I’m not nearly so sure.
This leads to the hilarious-to-me situation where “Aliens invade Earth for its oil” is actually the most plausible of the “resource stealing” scenarios... and is the one that nobody will touch because it’s just too corny and heavy-handed. Gotta love that.
My personal theory is that Earth is a secret sweatshop for an alien race that really, really likes second-hand plastic.
Friday, May 7, 2010
Wednesday, May 5, 2010
I am not alone. The main finding, to my mind, is,
Nobody can spell “fuchsia”.
So, I now feel vindicated in sticking to basic colors! Unfortunately, this does nothing for the accusation that I have no taste.
Sunday, May 2, 2010
I altered the linked recipe by cubing the pork and then browning all the cubes. It took a lot longer this way, but was totally worth it. I also stirred in some crushed garlic, and monkeyed with the dry rub recipe. It smells really, really good.
Monday, April 26, 2010
* TWIFComp. Interactive fiction in 140 characters or less (not including whitespace). I thought this one was kind of dumb, actually, but after some poking around, it looks as though people with more poetry in their souls than I have (or at least more brevity) have come up with some interesting things.
* A duet for the ages. I will not apologize for sharing this link.
* AutoDesk Sketchbook Mobile. I’ve tried a couple drawing programs for the iPod Touch and mostly found them atrocious. This one intrigues me, but I keep going back and forth over whether I’d use it often enough to be worth the price.
* Paying taxes with art. For the past fifty years or so, Mexican artists of the painterly persuasion have been able to pay their taxes with art! In the process, the Mexican government has amassed a remarkable collection. This intrigues me -- what if Joss Whedon got to pay his taxes this way? (H/T Marginal Revolution)
* Article on statistical significance. I have not yet fully digested this one. On the one hand, I agree that many scientists have a poor grasp of statistics. Human beings in general do poorly with the subject. I have complained for a long time, too, that medical studies I’ve read often have very small and too-heterogeneous sample sizes. I’m not entirely sure what Siegfried’s conclusions are here, thanks mostly to his somewhat aggressive wording. I think that this is best viewed as an argument for earlier and more comprehensive education in statistics. (H/T and further discussion, MR)
* Scotch as a form of vicarious travel. Fun little post about various whiskies, touring Scotland by way of its liquids. Posting it here so I don’t lose it. Summary:
Scotch is an acquired taste, but can be as fun to drink and parse as wine, if you’re so inclined. To me, it’s like drinking in a sense of place, and since that place is Scotland, you know I dig it. It’s about downing the distilled essence of a landscape, tasting the waters and grains and peats of a far-away land; about turning Scotland into smoke and fluid and taking it in through your nose and mouth.
(H/T Chuck Wendig)
* Jay Lake eats a Double Down. I have to admit, this “sandwich” has become something of an odd obsession for me. Basically, it is a couple pieces of bacon and cheese sandwiched between two pieces of fried chicken. Here, Mr. Lake eats one so that the rest of us don’t have to (want to?). It effectively demonstrates the difference between “good” and “awesome”. (H/T John Scalzi)
* Jane McGonigal’s TED talk on games as art. Prompted Ebert’s most recent ill-informed spew on the subject, which I have no interest in linking to. Interesting talk, though not the best I’ve seen -- honestly, I get a little tired of the “protests too much” aspect of the “games as art” argument. This is pretty much going to be one that will only be won when the people who didn’t grow up playing video games have all died of old age. Until then, lots of handwringing and frustration.
* Installing Linux on a dead badger. I could probably explain this one, but I won’t.
Saturday, April 24, 2010
What really stuck in my head, though, was the lengths to which the commenter had to go to persuade her friend that no, the story really was fake. The story itself is hard to believe... but if you’re ready and willing to think the worst of people, it’s not too hard to accept, especially when the story is stripped of context. Now, having seen many exchanges like this one, what sadly surprises me most about it is how impressively civil these two people are to each other, even though each must have found the other maddening.
Until and unless I am proven otherwise, my stance is that the original source of that piece was either removed or corrupted due to the sensitivity/controversy it could generate.
I do not know what is "legitimate" on the internet when every Dick, Tom and Harry can use the internet to publish "news."
While that last sentence is a valuable insight, it seems to me that this person draws exactly the wrong conclusions from it. Their statement appears to me to boil down to, “I will believe anything I read that confirms my personal biases.” Which does not make this person much different from most other people on the internet, just more up-front about it!
I will do my best to think about this next time I see something on the interwebs that outrages me -- do me a favor and try to do so too?
Friday, April 16, 2010
Death in a Tin Can -- submitted to Viable Paradise (workshop)
The Body and the Bomb -- submitted to Strange Horizons
Where Do They Bury the Survivors? -- finished, shopped around, waiting in the desk drawer for either a revision, a market that will take novella-length works from a new writer, or having sold enough other pieces to no longer be as much of a risk
Down Came a Blackbird (critter’d as Proud, working title was Dead Drunk) -- being refactored, lengthened
Midnight Train -- partial draft, somewhat stalled
The Detective and the Detective -- Notes/Outline stage
So! Six works in various stages of finish, a third of them currently on submission. Not great, but not terrible.
Thursday, April 15, 2010
Tuesday, April 13, 2010
Update: Oh, hey, if you’re looking for something to do with that big fat refund check, here’s a fun place to spend some cash. Documentary about Japanese entomology? You know you want that.
Sunday, April 11, 2010
I also finally got off my butt and submitted my short story The Body and the Bomb to Strange Horizons. I really hope they take it, because of all the markets I’ve looked at, that one has just blown me away. They publish quality fiction and articles (all free to read, though assuredly not free to produce, hint hint), and their submission procedure is amazing: they’re responsive, they give you a tracking page, they just give you information so that you’re not (just) sitting at home in the dark biting your nails. They’ve published a lot of good authors, who seem to be fiercely loyal. Oh, and they pay professional rates, which means that publishing through them makes an author eligible to join SFWA, which is one of my goals for this year.
Saturday, April 10, 2010
Am I the only one who remembers just how horrible American cell phones were just a few years ago? Every single one was a locked-down walled-garden captive market with “custom” (read: awful) operating systems from each vendor that crippled even the best hardware. They charged for damn near everything, from ring tones to moving photos around, and removed basic functionality because it didn’t fit their “vision”. Whatever the flaws of the iPhone and the 3G iPad, it’s a hell of a lot better than what we had before, especially for what is basically a transitional device. From the point of view of the cell phone companies, the iPhone has been a Trojan Horse, and I think it’s no accident that it was picked up by one of the weakest competitors in that field. AT&T was the defector from a common strategy, remember, when they broke ranks and agreed to sell a phone whose software -- particularly the Apps -- someone else controlled.
I have many issues with the App store. But I have never in my life seen such a thriving market for small-scale indie software. There has been a gaping hole in that part of the software market for many years, based on an unwillingness or inability to pay for small pieces of software that do one limited thing well. Up until recently, the only viable paths for these kinds of software have been shareware, freeware, and open source for distributed software, and pretty much only ad-supported software for online services. These are miserable options and have helped stunt the growth of the software industry. The App Store is a worthwhile experiment in small-scale software sales. It’s been done better on a limited scale, but by and large I don’t think it’s terrible.
That said, their review process as-implemented is something that I can’t entirely get behind. Wanting to ensure that Apps don’t open security holes? Awesome. Filtering out useful apps just to protect AT&T’s benighted business practices? Obnoxious, but for the moment a semi-reasonable bargain. Going through and cutting out swathes of ‘adult’ apps? Ridiculous and obnoxious. But these flaws are fixable, and market pressure from the Droid contingent may well push Apple to solve them. What’s not as fixable is their ability to squash whole sub-markets by picking winners from an immature field and/or pushing their own solution -- that will require either self-control or Justice Department action. So, this is not a perfect approach by any stretch, but that imperfection can be addressed by competition.
See, I think that the key insight here is that Apple is not so much selling iPads and iPhones as selling convenient and portable access to Apps. The whole philosophy behind this hardware is to be as transparent as possible: it’s supposed to be practically a physical manifestation of software, a blank slate that turns into any other handheld device. They’ve made it possible for someone to sell me a $2 scientific calculator that turns into someone else’s $1 pocket video game, and someone else’s free e-reader. It’s not intended to replace your laptop (as will be obvious when they unveil their next expensive MacBook lineup); it’s intended to replace the dozens of other (mostly unmoddable!) devices that you already have or would like to have. Here, then, his complaint seems to be that there ought not be a gatekeeper.
But there are significant benefits to having a gatekeeper that Doctorow doesn’t acknowledge. He complains that Apple treats its users like morons. But the majority of the computer-using population has demonstrated incompetence when it comes to computer security, and they’ve been aided and abetted by operating systems that let them install whatever the hell they want. Do you have any idea just how many botnets there are out there? How many people fall for stupid tricks like popping up an official-looking window asking permission to infect their machines with all manner of nasties?
This all isn’t the fault of the users, necessarily. I’ve heard many explanations ranging from confusing user interfaces to a simple lack of education about how computers work, but relatively few proposals for how to address this issue -- many of them basically requiring a step where “All our users suddenly get savvier”. But over the last 25 years of having open platforms (free compilers, free and open source operating systems, standardized hardware that’s easy to mix and match) the average user has gotten continually dumber and less savvy. It may be insulting to assume that most of your consumer base is stupid, but it’s not wrong. And having a device that’s easier to use means attracting users who are too dumb to use even a PC. Apple’s approach is a solution to this problem, with a simplified interface and strictly-controlled installation path. Not the best solution for everyone, but hey: the iPad wasn’t really made for people for whom those aren’t problems. For those who do suffer from those problems, the iPad ain’t bad. I could give my parents or grandparents or teenage cousins an iPad without scaring the hell out of them about being very very careful of what they download and run. (I mean, I won’t, because I’m a cheapskate. But I could.) They’ll fall prey to viruses and worms sooner or later, but with these safeguards they’re going to be more resilient than your average Windows desktop. You say “Apple’s insulting its customers by calling them stupid”, I say “Apple’s taking responsibility for many of its customers being stupid”. Same thing?
Now, that could be seen as rationale for making the device difficult to tinker with, but I reject the notion that it is difficult to tinker with. Software tinkering is much more interesting for this device than hardware tinkering, given that software is the whole point of the thing. I’ve downloaded -- for free -- their SDKs and looked through the many, many tutorials and documentation files they’ve made available, also for free. I hate Objective C with the heat of, oh, a half-dozen suns, but it’s not an intentional hurdle: those poor fucks at Apple use it too. OK, they make you pay like $100 to upload anything to your own device, which I find mildly irritating, but this is not a major or unexpected hurdle. Dev kits for embedded devices like PICs or FPGAs can cost much more than that, and I’ve rarely heard people say that those things were killing innovation. Compilers for a number of systems have frequently been sold rather than given away free, too. Besides, if cost is such a hurdle, jailbreaking works just as well, and judging from Apple’s reactions I doubt they’re really that unhappy about the jailbreakers. So, yeah: if you want to fiddle with your iPad, you have to shell out some dough and/or jump through some hoops. Are those hoops really more onerous than learning to program in the first place? I doubt it.
Having discussed the software, let’s address his complaint that being unable to open it means that you don’t really own it... That’s true to an extent, but it is a gross oversimplification. For one thing, you can open it. I can buy a $10 plastic piece to pop open my iPod Touch, and I nearly did when I broke the screen. Instructions abound explaining exactly how to disassemble and reassemble it. The trouble is, screws or no screws, the kind of electronic fabrication required to build this device means that modding this thing is going to be extremely difficult no matter what. It’s neither easier nor more difficult to open up and modify than most other similarly-scaled consumer devices -- it’s just a more attractive target. My Nintendo DS isn’t very easy to mod either, but nary a peep about that. Sure, Apple could have added solder points, a better peripheral port, maybe put out hardware documentation, but I can’t exactly pretend to be surprised that they didn’t, considering that the second version of its device was (still is?) delayed while waiting for FCC approval. I’m much less happy about relying on them to swap out batteries, or being unable to change hard drives or add more RAM, but these things are somewhat trivial, amounting to wanting the same thing Apple sold you, only numerically better. The only thing I’m really unhappy about is the Bluetooth lockdown, particularly the inability to add a keyboard.
So, Doctorow is sorta, kinda right as far as that goes: if your interest in a platform has more to do with hacking the hardware than hacking the software, it’s not designed for you. No cell phone type device is, until the FCC decides it’s ok to take a few (reasonable?) risks. But it is friendly to most of its intended audience, it’s not far outside mainstream practices (I’d say that it’s a damn sight more generous than the standard practices of the cell phone industry!), and Doctorow’s unwillingness to acknowledge that fact, or to acknowledge that Apple has helped make the cell phone a hell of a lot more consumer/developer friendly, makes it very hard to take him seriously.
As for the remaining point (his digression on journalism is orthogonal) about ownership of the digital stuff you buy... I am sympathetic, but skeptical. This is a complicated question that society basically has not yet answered, and while I may admire Doctorow for being willing to give away so freely the fruits of his mental labor, I think he frequently goes way too far in expecting others to basically do the same. The end result of his mentality here is, in my opinion, likely to be an environment where creative programming can only be a hobby for many people rather than a means of support. When he says at the end, “If you want to write code for a platform where the only thing that determines whether you're going to succeed with it is whether your audience loves it, the iPad isn't for you,” he might as well say “this society isn’t for you” or “this species isn’t for you.” There are always externalities and tradeoffs: ability to distribute, ability to get the word out, ability to stay within local laws, ability to get your adoring audience to pay you instead of grabbing your code from a warez site, &c. And notice that he doesn’t say “if you want to be paid to write code for a platform...” I question the notion that making that cash selling through Apple’s App Store is that much more onerous for the majority of developers than making that cash dealing with advertising companies or PayPal. I also question the proposition that the difficulties to the remaining developers outweigh Apple’s interest in maintaining the security and ease of use of the device. If he’d like to make an argument in favor of another platform like Android or describe a hypothetical ideal, I’m all ears. If he wants to make an argument in favor of government regulation of walled-garden markets, again, I’m all ears. But he hasn’t attempted to make those arguments, so far as I know.
(Oh, a word on Flash: I hate Flash. Fuck Flash. The people who use Flash now on their sites were the people who used to love blink tags. The notion that Apple’s refusal to support it may mean fewer Flash-based websites and talking ads? Fills me with glee. That will go away when HTML 5 becomes the “let’s annoy the piss out of John!” method of choice, but I’ll take what I can get for now.)
(Also: it bothers me that Apple’s spell-checker flags “ain’t” as a misspelling. Fuck you, Jobs, and the prescriptivist linguistics you rode in on!)
 His point that gadgets come and go works against him here, in my opinion: if I desperately need a calculator that handles trig functions exactly once and then never again, owning an iPhone or iPad makes it much easier, more environmentally friendly, and less costly to acquire and then discard it. And how many pedometers are Americans going to buy before finally admitting that we’ll never really like them? Should they go into the landfill, or into the bit bucket? But even if Apple completely drops the iPad and iPhone, I’ll bet dollars to donuts  that a slew of emulators will pop up and allow us to keep on using the ones we’ve already bought.
 The FCC issue is a big one that he fails to acknowledge, and I think it significantly weakens his argument. In order to keep costs down, the regular iPad needs to be very similar to the 3G one (not to mention the iPhone), and we’re just not going to see such a device being friendly to hardware hackers.
 especially if those donuts cost more than a dollar each
Wednesday, April 7, 2010
Now, there are plenty of games out there where user choice directs the way the game goes: The Sims, Dwarf Fortress, Spore... sandbox games that don’t have much of a designer-driven narrative. All of those games can have pretty vivid player-driven narratives, but those are unreliable, and not at all guaranteed. Besides, the random number generator can be a lousy storyteller: sometimes you look up at the sky and the clouds just look like clouds.
Many games that are considered to have told strong stories have been very rigid, railroading the player from one plot element to the next, often showing off a series of pre-generated events. The result is a cohesive story that everyone who plays the game can play through, see more or less the whole thing, and then compare notes at the coffee machine. Now, not all linear games are purely linear: like extras on the DVDs, it’s been common to make it possible to get elaborations on the main story by doing additional tasks -- Final Fantasy VI was great for this, with some deeply hidden bits of characterization, and even a willingness to leave story elements unresolved if the player didn’t bother looking for a character in the second half of the game.
But... linear storytelling is such a crazy rigid thing for such a mutable medium. Historically, stories have been changeable: campfire stories, fairy tales, epic poems -- all these things were traditionally tailored to the audiences, which is why Beowulf and the Iliad were so full of carefully dropped names. Everyone likes different things in a story, and when you’re writing a book you can be as narrow as you want, targeting your vegan amputee bulimic skier demographic with laser-like precision. Because hey, with your artistic purity and a case of ramen, you can at least live long enough to get pellagra. But games can cost serious money to develop, and it is frequently desirable to have a multi-digit audience. The larger the budget, the broader the appeal needs to be. You can either achieve that by making the story as bland as possible, filled with violence and T&A, or you can embed many different potential narratives into one game, customizing the game to the personality and interests of the player.
There have been a number of different approaches to that. One example is basically the switched-track railroad: the designers embed multiple plots into one game, and you basically pick the plot you’re watching by throwing switches. These limited, discrete choices can make for very different games, or (more often) basically one story with a bunch of different endings, but are time-consuming to write. As a result, there have tended to be fewer paths, such that a given player can revert to a few strategic save points and see all of the endings, a practice known as completionism. Sometimes these folks are derided as playing a game to death, but it seems to me that usually they are cherished: they’re rewarded with super-difficult endings that are unlikely to arise through ordinary gameplay, and are often directly marketed to on the box. (“Over 15 nearly identical endings!”)
A lot of the different approaches come from the tabletop gaming realm, and the kind of game you play doesn’t depend entirely on the plot(s). The first big one is character customization: you spend a lot of time with the PC, and getting to customize a bit can help a lot. Even just picking a party of stock characters, some players will give those characters strong personalities. Quest for Glory did this particularly well, I thought, offering multiple paths through the game for the initial three basic character classes.  Sometimes all it takes to make a story more interesting to someone is to better match the player... but the big pitfall here is that this little bit of generosity might say a little too much about who you think your audience is (or rather isn’t).
Part of selecting the protagonist can also be modifying abilities, usually derived from the Dungeons and Dragons set: Strength, Dexterity, Constitution, Wisdom, Intelligence, and Charisma (sometimes also Luck). By choosing a character with, say, high Dex/Int and low Str/Con (See? I speak the lingo!) you have to solve problems very differently than the other way around, leading to a different type of game. Or failure. And if you screw up or just find your play style changing, there are usually ways to alter those stats (and thereby the nature of the character) during gameplay, by spending time leveling up or looking for special items.
Now, that’s basically just selecting the protagonist, but a number of games (those of Peter Molyneux come to mind first) customize that protagonist at a purportedly deeper level over the course of the game, according to the choices that the player makes, usually to place the character on an axis between two extremes. The simplest, and to my mind most common, axis is good vs. evil. Do a good thing, you get nudged in the ‘Good’ direction. Do a bad thing, you get nudged in the ‘Evil’ direction. The trouble is, these games usually betray a somewhat simplistic conception of evil, and are frequently very bad at looking at the big picture. Also, possibly because some game designers are apparently uncomfortable with players being ‘evil’ just because they’re bastards, they sometimes seem to go out of their way to make the choice more a practical matter between ‘good’ and ‘convenient’ -- or more generally, “Do you use your powers for good or for awesome?” -- with only the notion of ‘this is wrong and no one should do it ever!’ standing against the more objectively useful option... which is itself an interesting statement about the source of morality, but in practice I usually find it pretty lame, especially in games where as the protagonist you had to hack/slash/shoot your way through a bunch of one-dimensional enemies just to get to that moral choice. Killing a whole lot of people and taking their stuff is overlooked, but lying or cheating at cards will get you shunned? That’s... pretty historically accurate, actually. Moving on.
The basis behind these sorts of choices is the notion that there is no right answer, that there’s always a tradeoff. This can work much better in games where the choice isn’t between good and bad, per se, but about how the player views the character. Arcanum is my favorite example of this. In that (fantasy steampunk RPG) game, the choices are generally between technology and magic: consistently favoring, say, technology rewards the player for a sort of purity of vision with awesome things like steampunk robots, but slams the door to magic use firmly shut. At any given point, it might be advantageous to go against type, but it would run counter to long-term goals and the vision of the character. I was fortunate to attend a talk given by the fine folks at Obsidian about choice in their upcoming game Alpha Protocol, and one thing that they mentioned (let slip?) is that while all the choices can stand alone, they were designed around a handful of familiar archetypes (the suave James Bond type, the intense Jack Bauer type, or the pragmatic Jason Bourne type -- huh, check out those initials), and I suspect that the plot of the game will show a bit more thematic consistency if the player sticks to one of those types.
Then there are the approaches that basically ask the player up front what kind of game they want, or encode that preference obliquely: picking one’s favorite color in Moonmist, for example, or that tarot reading at the start of Ogre Battle. I’m not sure I’ve seen that done well, honestly.
There is another tactic being used increasingly: giving the computer the more deliberate role of gamemaster, with the tools to gauge how the game is being played and adjust accordingly. The ‘director’ in Left 4 Dead is a good example of this: it can tell generally whether the players are being cautious or reckless, for example, and throw different kinds of zombies at them to make the game more challenging, or potentially adjust the difficulty for a team that’s doing particularly poorly or very well.
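The core of that kind of director logic is simpler than it sounds. Here’s a minimal sketch -- entirely my own invention, not Valve’s actual algorithm; the window size, damage scale, and multiplier range are all made-up illustration:

```python
# A toy 'director': watch recent player performance and scale how hard
# the game pushes back. All numbers here are invented for illustration.

from collections import deque

class Director:
    def __init__(self, window=10):
        # Rolling window of recent damage-taken samples (0-100 per sample).
        self.samples = deque(maxlen=window)

    def record(self, damage_taken):
        self.samples.append(damage_taken)

    def intensity(self):
        """Spawn-intensity multiplier in [0.5, 1.5].

        A healthy, cautious team (low recent damage) gets more pressure;
        a struggling team gets a breather.
        """
        if not self.samples:
            return 1.0
        avg = sum(self.samples) / len(self.samples)
        return max(0.5, min(1.5, 1.5 - avg / 100.0))
```

So a team breezing through untouched sees the intensity climb toward 1.5, while a team limping along at heavy damage gets dialed back to 0.5 -- the whole trick is just a rolling average driving a clamped multiplier.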
So, what’s to be done? Where’s this going?
The funny thing is, games are already all about choices and style. In a first-person shooter, does this player do a lot of exploring, or get straight to the point? Does this player kill everything that moves, or spare fleeing enemies? In an RPG, does this player walk around in mis-matched armor with a weapon at the ready, even in town? Does this player totally blow off the main plot in favor of looking for treasure or searching out weaker enemies to kill just for the experience of having done so?  In an IF game, is the player a total kleptomaniac, picking up anything at all regardless of obvious consequence, or are objects only picked up when there is an obvious use for them?  Games frequently offer optional ways to be in-character, like closing doors or turning off lights when leaving a room, or otherwise cleaning up after oneself (I’ll refrain here from overtly spoiling the Last Lousy Point in a very good game by Admiral Jota).
I’m going to go out on a limb here: I think that there is little point in setting up deliberate, discrete choices if the game ignores all the common choices already being made by the player about how to play the game. And I think I see a trend in games to include those choices more actively in the narrative. Now, am I eventually going to be playing a game that basically embeds an MBTI-style personality test to determine just what story I’ll personally find the most satisfying? That would not surprise me. But it would, I think, likely be a disappointment. I think that the next game that really wows people is going to be a bit of a cross between Left 4 Dead and Fallout 3: a sandbox-type game with a range of morally ambiguous paths with multiple axes of personality traits emerging (not just good vs. evil, but pacifist vs. violent, talkative vs. quiet, packrat vs. traveling light, etc.), and a ‘director’ that analyzes style of play and adjusts the plot accordingly. Such a game could do that without ever presenting the player with an explicit choice. (Also, five bucks says that the protagonist in the first such game to do this well is mute, like Chrono.)
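For what it’s worth, the bookkeeping behind that kind of silent profiling is trivial. A hypothetical sketch -- the axis names and event format are mine, not from any real game:

```python
# A hypothetical play-style profiler: tally ordinary in-game actions
# into personality axes without ever asking the player anything.

from collections import defaultdict

def profile(events):
    """events: iterable of (axis, delta) pairs, delta in {-1, +1}.

    Returns per-axis leanings in [-1, 1]; e.g. on a 'violence' axis,
    -1 would be a pure pacifist run and +1 a kill-everything run.
    """
    totals = defaultdict(int)
    counts = defaultdict(int)
    for axis, delta in events:
        totals[axis] += delta
        counts[axis] += 1
    return {axis: totals[axis] / counts[axis] for axis in totals}
```

A director could then branch the plot on whichever axes have drifted far from zero, with no explicit choice ever shown to the player.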
Anyway, I’ve been sitting on this post and tinkering for the better part of a week. Time to cut the beast loose and let you all kill it.
 OK, rant time. I played through those games when I was a kid, and managed to turn a rogue character of mine into a paladin in the second game, without hints or anything. I kept that character through the fourth game, and then held onto that file on a floppy disk for freaking YEARS waiting for QfG 5 to come out. When it finally did and I finally got a copy, I found a floppy drive, went through those old disks, and found them all succumbed to bit-rot. I was So. Pissed. Off. I almost didn’t play QfG 5 as a result.
 I’ve made the point before, but it’s worth repeating: The practice of ‘grinding’ is an inherently evil, vicious act. Imagine hearing a third-party report of a typical ten minutes spent grinding: “Yeah, this guy decided he needed killing practice or money or something, so he went out where he knew he’d be attacked, easily finished off the poor bastards he came across (even when he surprised them!) then collected their belongings and sold them in town.” Even killing horrible monsters is morally ambiguous if the player knows that there is an unending supply, particularly if the plot involves finding another way to “clear the land of taint”. That doesn’t mean that a game cannot or should not have these elements, only that they should be treated with a bit more... sophistication?
 I do *not* mean the frequently-obnoxious tactic of preventing the player from picking something up before it’s ‘ripe’, or the practice of making every object either mobile and useful or non-mobile and not useful. Nor am I talking about games (*cough* Hitchhiker’s Guide *cough*) that punish the player for not having grabbed something non-obvious earlier. I’m purely talking about reasonable self-restraint here.
Friday, April 2, 2010
Wednesday, March 31, 2010
I look at it in terms of signal theory. Art is the signal, craft is the channel. The artist has an idea, a concept, a feeling, a signal that needs to get through to the audience. Whatever the artist is trying to say has to get somehow from the artist’s brain to the viewer/reader/player/user. When someone says, “is it art?” they’re usually trying to receive that signal and judge its effect on them. If the signal comes through clearly, it can still be shot down as “not art” based on the receiver’s judgment. If it doesn’t come through clearly, the receiver may have to work to make sense of it, possibly in a way the artist doesn’t intend, like trying to decipher a child’s drawing.
The question, then, is how effectively that signal can be transmitted. That’s where craft comes in: an artist without craft is throwing bits into the void. Maybe the signal gets through, maybe not. Craft, skill, gives the artist a clearer channel. There have been a lot of times when writing where I sit there with a scene in my head, and I just can’t describe it. That’s a craft issue if it really is that clear in my head... but it’s so easy for a craftsman to blame his tools, right? And which would a writer rather admit: lacking talent or lacking vision? (I really don’t know; I lack both.)
We need to talk about bandwidth, too. Not all channels can support the same throughput of information: try to cram too much meaning into flash fiction, and it’s likely to get garbled for all but the clearest channels. (Hemingway could write a novel in six words. Most writers can’t.) At a given level of skill, there’s a limit to what can be said. The high-frequency signal, the fiddly little details of your vision, is the easiest part to lose. Now, looking at the subject from this point of view, we can talk about that book/chair comment (“I don’t think a chair is going to save someone’s life in the way a writer can.”) in a different way: a chair just has less bandwidth than a book. There’s a limit to what can be said through even the most well-crafted chair. Rodin might be able to get across a fundamental human truth in a chair (especially if it’s allowed to be a rocking chair). Me, I’d be lucky if I could get across the rough notion “you can sit on this and not die.”
The receiver, too, has a role in this dance. In some ways, just as much as the transmitter. The best transmitter on the clearest day won’t do too well with a rusted-out rabbit-ear antenna sitting in a concrete bunker. To an extent, you can make up for a tinny transmitter by having a really good receiver. In the same way that NASA uses enormous powered radio antennae to pick up the very faint signals from faraway spacecraft, well-trained lookers-at-art can discern meaning where others cannot. Ever sat dumbfounded while a parent proudly shows off a crayoned monstrosity, enthusiastically pointing out one purple blob as the dog, a green stick as a grandmother, and a remarkable representation of a 1973 Pontiac as a juice stain? Or listened to a wine taster prattle on about notes of peaches, smokiness, and bitter almond? The craft isn’t there, but a good reader can still pick up the idea being transmitted. This can work against a writer: if you always show your drafts to the same person, you risk that person knowing what you mean rather than what you say, overlooking flaws in the craft. It’s like, hypothetically speaking, picking up a cell signal with a freaking satellite dish, then claiming, “Oh sure, you get great reception out there in west-central New Hampshire, you won’t have any problems if your car breaks down by the reservoir” like those miserable fuckards at Sprint must have done.
So, your artistic vision isn’t getting across -- what does signal theory offer by way of an answer in terms of your art and your craft? Plenty, I think.
The obvious message is to strive for the clearest channel you can get: perfect your damn craft. Even the simplest message can get garbled. Like Humpty Dumpty, when you say a word, it should mean exactly what you want it to mean.
The next thing is to try to have a sense of what your bandwidth is. Flash fiction, short stories, photographs: There’s a limit to what even the best craft can effectively put into those forms, and if you try to encode too much you'll overwhelm the receiver and your message will be garbled by its own sheer weight. I have great affection for Tolkien’s work, but I think he struck a better balance in The Hobbit than in The Lord of the Rings in terms of how much world he crammed into those pages. The basics got through, but some signal was definitely lost on this reader. Me, I don’t have Tolkien’s skill, and if I try that I’ll just drive readers away. The Shannon-Hartley theorem gives a fundamental limit to the amount of information you can send through a particular channel. It turns out, there’s a remarkably similar theory for literature in terms of words per minute, but it won’t fit in the margins of this blog.
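For the curious, the channel-capacity limit I’m waving at is usually written in its Shannon-Hartley form:

```latex
% Maximum error-free information rate C (bits/second) of a channel
% with bandwidth B (Hz) and signal-to-noise ratio S/N:
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

Narrow the channel (a short story instead of a novel) or lower the signal-to-noise ratio (shaky craft), and the capacity drops -- which is the whole metaphor in one line.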
The other thing you can do is to crank up the transmitting power: hit the audience’s emotional triggers. Blam! down goes Bambi’s mother and all of a sudden you weepy bastards give a damn about deer for the first time in your lives. It’s a manipulative trick, and people can resent it, but hey, whatever works.
Or, conserve bandwidth with a simpler message: drop the subtleties and go for a coarser, clearer artistic statement, painted with simpler, bolder strokes. This is frequently needed when going from a high-bandwidth channel to a lower one: making a movie from a television series, for example (why yes, I did recently read an old review of Serenity, why do you ask?). This is often derided as dumbing-down (particularly when Hollywood does it), but when you really know and understand the basic artistic motivation, this can instead be a refreshing clarification, stripped of unnecessary clutter: think about the Renaissance paintings of classical stories and myths, for example, or paintings of Shakespeare’s plays. A thoughtful condensation can have the effect of amplifying the important bits and making them clearer to philistines like myself. As Antoine de Saint-Exupery said, “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”
Alternately, if you’ve got a really important point you can’t bear to leave off, you can use a kind of literary error-correcting code: use repetition and your knowledge of the reader to make sure that a potentially garbled message can be correctly interpreted on the far side. Basically, trade bandwidth for a guarantee of delivery. Teachers use this a lot to drill through dense skulls, by saying the same things multiple ways, or stopping periodically to ask, “did you get this?” Ayn Rand was a terrible offender here: in her longer books especially, it seemed that she was so afraid that the reader wouldn’t get the point that she would eventually just break the narrative and beat the reader over the head with it.
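The engineering version of that trade, for the curious, is a repetition code -- the bluntest error-correcting scheme there is. A toy sketch:

```python
# A toy repetition code: the bluntest way to trade bandwidth for
# reliability. Each bit is sent three times; the receiver takes a
# majority vote, so any single flipped copy per bit is corrected.

def encode(bits):
    """Repeat each bit three times (tripling the bandwidth cost)."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority-vote each group of three received bits."""
    out = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out
```

Three times the words, but the point survives a garble or two along the way -- which is more or less what a teacher saying the same thing three different ways is doing.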
Anyway, those are my thoughts on the subject, and probably some terrible advice. Hack away.
Friday, March 26, 2010
So think about that, teachers and professors, next time you lift your poison pen to scratch out a test paper! J’accuse!
(H/T to Doc Grasshopper)
Thursday, March 25, 2010
Monday, March 22, 2010
Making french fries starting with cold oil. This bothers me; it seems like the resulting fries will just drip oil, but the claim is made that exactly the opposite is true. I will have to try it at some point, preferably on a day that the garbage goes out.
Savory oatmeal. OK, I get the theory, it’s basically porridge, jook, or the basis for white pudding (think black pudding without the blood). And I like a good savory breakfast (see down the list) but this just seems weird to me.
Salted coffee. I actually picked up on this one a while ago, and can attest that it works well -- but it still just seems wrong, especially because I think salted tea is disgusting.
Hydrating nuts. Certainly hydrating beans makes sense to me. And I’ve even joked about attempting to make baked beans with peanuts instead of the more usual legumes. But this is intriguing and weird and I’m having a little trouble with the idea.
Vinegared drinks. I touched on this earlier when I talked about gastriques, and mentioned that some people were putting them in cocktails. I’m not sure I can bring myself to drink the Vinegar Cocktail in that link... but it does sound awfully interesting, and like Kate in the comments I have also heard of adding pickle juice to drinks, even to beer. Speaking of beer...
Beer for breakfast. All right, this has nothing to do with what I’ve been reading, and everything to do with going to have a good Irish breakfast last week on the 17th. But several people remarked on how well Guinness goes down in the morning, and I was surprised to agree! Ancestry does tell, I suppose...
Sunday, March 21, 2010
OK, he wasn’t really the first person to notice it. Aristotle (in Meteorology, Book 1, near the end of Part 12) noticed it... but then, Aristotle was not much of an experimentalist: he used to claim that women had fewer teeth than men, for example. Despite having had three wives, it never occurred to him to, y’know, count. (On the other hand, if Aristotle asked to count my teeth, I’d say no.)
I was struck, in reading the two papers I’m about to describe, both by how well-crafted they are and how very different they are from each other. They complement each other in important ways, and I highly recommend reading (or at least skimming) both, as they are both freely available through Cornell’s arXiv, which is generally my go-to place for Interesting Science Crud and an excellent resource for would-be science-fiction writers.
The first paper (actually, the second one I read, but you should read it first) is an excellent historical overview of the problem, by Monwhea Jeng. It does a very able job of answering the question, “What is this effect, why is it interesting, and why should it be studied?”, and of surveying the explanations that others have proposed. Bottom line: this problem is real, weird, and worth pursuing. (Jeng is a very able writer, too, and paints a much more interesting portrait of Mpemba’s experience than does Wikipedia.)
I did take issue with one statement, though:
What is interesting about the Mpemba effect is that unlike the examples commonly given in science textbooks, where theory and experiment march hand-in-hand, always leading to further progress, here theory (rightly or wrongly) prevents acceptance of experiment.
Now, this paragraph (which I have admittedly scrubbed of context) is more subtle than it looks: my first reaction was to shake my head and say that Jeng has it exactly backwards, as Gödel, Darwin, or many others would attest. But the bit about the science textbooks gives me pause here. I remember my textbooks as playing up the scientist-as-lone-hero aspect. Has that changed? Am I misremembering?
The second paper I read this morning, by James Brownridge of SUNY Binghamton, is an attempt to bring together all the possible experimental conditions that give rise to this apparent effect. This one is a very thorough experimental paper that comes to an explanation that I find very convincing. (Hint: the definition of the Mpemba effect is that, under the same experimental conditions, a quantity of hot liquid freezes faster than the same quantity of otherwise identical cold liquid. Ask yourself what it really means to have the same experimental conditions.) Brownridge goes to great lengths to examine the problem from many different angles, but also maintains something of a narrative: this was done because of that, and then this and that followed as a result.
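It’s worth spelling out why the effect is surprising at all: under the naive textbook model (Newton’s law of cooling), the hotter sample has to pass through the colder sample’s starting temperature on its way down, and from there follows the identical trajectory -- so it can never reach freezing first. A minimal sketch, where the freezer temperature and cooling constant are arbitrary assumed values, not anything from either paper:

```python
import math

def time_to_reach(T_start, T_target, T_env=-10.0, k=0.01):
    """Newton's law of cooling: T(t) = T_env + (T_start - T_env) * exp(-k * t).
    Solving for t gives the time to cool from T_start down to T_target."""
    return math.log((T_start - T_env) / (T_target - T_env)) / k

hot  = time_to_reach(70.0, 0.0)   # hot water, down to the freezing point
cold = time_to_reach(30.0, 0.0)   # cold water, same target
assert hot > cold  # the naive model says hot can never win
```

So any genuine Mpemba observation means some assumption of this model is being violated -- evaporation, dissolved gases, supercooling, container contact -- which is exactly the territory Brownridge maps out.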
The contrast between the two papers is, to me at least, striking. I was tempted at the outset to put more value in the Brownridge paper with its detailed experiments, charts, and explanations. But I was corrected by none other than Brownridge, who holds up Jeng’s paper in the first paragraph as helpful and useful.
So, I backpedaled, and thought about it. There are two important pieces here: the problem and the solution. Often in the papers I read (and write) the two pieces come from the same author, who frames the problem, describes the procedures, and presents the solution all in one paper. It’s difficult under those conditions to avoid warping the definition of the problem to make the solution look better: after all, you’re not just persuading the reader that you’ve done useful science, but also the publishers and reviewers of that paper, and to some extent yourself and your teammates. I think that using another person’s paper as a problem definition can help keep you honest: it outlines the work a little better, and you can’t rephrase or reframe it in convenient ways.
Oh, and here’s another way to improve the discussion of science.
Friday, March 19, 2010
This was a bit of an experiment. I printed out the last draft, went through it with a pen -- then instead of opening up the original file, I typed it all back in. The theory here is that this is what writers used to have to do in the bad old days of typewriters and handwriting and clay tablets and oral tradition (that last one probably got ugly when you were really embarrassed about a draft). The results were mixed.
On the plus side, I had to give everything at least some attention. Having just finished a read-through, I had the whole thing in my head and I knew exactly where it was going. This was a great help in terms of deciding what clues had to go where, and which bits weren’t pointing in the right direction. I think that the result is a smoother piece of work. Also on the plus side was that I was more willing to junk large sections of text that I hadn't recopied yet. Heck, the laziness factor probably saved me a few hundred needless words. This was particularly true at the end: I had never been happy with the last two sections, and on retyping I just balked at doing all that work on something I thought was sub-par. This prompted me to produce what I consider a much better ending.
On the minus side, it was not nearly as helpful as I thought it would be in terms of making structural changes. Part of this was my failure to think ahead and put page breaks between sections, to see how things read in a different order. As a result, I focused far more on tactics rather than strategy, and had to go back through later to make the more sweeping high-level changes that the story needed. Also on the minus side was the fact that retyping was an opportunity to introduce new and interesting typos.
Bottom line for this experiment: it's a worthwhile thing to try, but only when I'm already very happy with a draft, and when I expect to have the time and energy to go back through it again. I am not sure that this would work well for a much longer work, but I may try it.
See also Lakeland’s analysis of the effect of education on mortality rates in a zombie invasion. Basically, it doesn’t help. And really, that should have been predicted, because the last thing you need in a zombie attack is MORE BRAINS! No, the math plainly shows that we can be saved only by scantily-clad teenage zombie killers. You can’t argue with math, people. For Christ’s sake, there are graphs! GRAPHS!
Edit: Speaking of zombies, check out this abomination against nature. The dead walk again!
Tuesday, March 16, 2010
Sunday, March 14, 2010
If my demands are not met, your hours will not be returned to you. Nor will the time it took you to read this post.
Hah! Hahahah! MWAHAHAHAHAHAHAHAHAHAAaaaa...
Tuesday, March 9, 2010
Anyway, I’m still catching up on work and other things, and getting ready to post the next few entries on Christie’s Labors of Hercules (as augmented by a book I just finished containing copies of some of her notes for those stories! Excitingly intrusive!), but in the meantime, go read about crash blossoms: newspaper headlines just ambiguous enough to bring our reading comprehension to a screeching halt. (The term originates here)
Saturday, March 6, 2010
If anyone wants to give it a read and offer feedback, let me know. I think I’ve plugged all the logical leaks -- and yes, that’s a challenge!
Wednesday, March 3, 2010
They’re all very cool, representing remarkable technical skill, and probably many long nights in the lab. But I think the reporter is missing the point of those humanoid chef-robots, judging by the juxtaposition of those with the work of the CMU Human-Robot Interaction team.
Allow me to explain in a roundabout way: there are already machines that make ramen (maybe not ones that also have knife-fights, but bear with me here) and that otherwise perform many of the other tasks here. For many individual tasks, using a humanoid robot or robot arm represents a lack of imagination: a designer with the mental agility to imagine how a task could be performed by an unrestricted body type will often come up with far more ingenious and efficient ways of doing it.
In fact, I’d say that very few tasks really require humanoid robots. I can’t think of any off-hand. For any individual task, there will almost always be a better form. But this is not to say that it is a bad idea to develop humanoid robots, far from it. The promise of a humanoid robot, and ultimately the (proper?) motivating factor behind many of these prototypes, is the same as the promise of an iPhone or something of that ilk: a flexible device that seamlessly becomes one of any number of other single-purpose devices. This is distinct from a personal computer in some important ways, but the primary one right now is that it is *doing* one thing at a time (whatever else it may be *thinking*, if you want to put it that way). By adding more cooking jobs to the general robotic repertoire, these researchers are converging on a suite of tasks for which the humanoid form probably is better-suited.
Microsoft Robotics also kind of gets this, I think. They ought to, anyway, as it is an analogue to Microsoft’s original strategy and success: of standardizing the slow part (the hardware) to focus on doing as much as possible with the fast part (the software). A humanoid robot (or more simply a single arm) can mechanically do just about any task they might desire (if inefficiently), so if we standardize on that ideal, the software and the logic can take a more central place. It represents a sort of design convergence: when you try to combine tasks into the simplest possible hardware, the more human tasks you add, the more human the hardware is going to look.
As for the people focused on human-robot interaction, there are interesting research questions there, and good science being done. But that research, to my mind, is not so much robotics research as it is human research with some very difficult test equipment: kind of like when zoologists design puppets that baby animals will feed from. (I really wish I could find a copy of a particular Calvin and Hobbes to link to here. It’s in “There’s Treasure Everywhere”, page 148)
Anyway, that’s my two cents on the subject. (And keep in mind that I’ve never actually done humanoid-robotic research, having focused entirely on rover-types, so I could be totally off-base)
Oh! If after reading that article you’re wondering what okonomiyaki is, by the way, it’s often referred to as a cabbage pancake or pizza. It’s... neither, really, or maybe both. I’m familiar with Osaka-style okonomiyaki, but as anyone will tell you, it can vary wildly, especially by region. For me, the little okonomiyaki-ya outside my dorm at Gaidai is the only true form: you take a batter of flour, potato starch, egg, and shredded cabbage, and spread it out over high heat on a hibachi table, usually on top of some kind of meat filling like bacon or shrimp. Flip it once (so the ‘filling’ is now on top), finish cooking, then top it. The traditional toppings, to my mind, are a thick sugary sauce (like yakisoba sauce or BBQ sauce), Japanese mayonnaise, bonito flakes, and powdered seaweed. It’s... tastier than it sounds? I like it, anyway.
One final thing: I’m trying out new blog software -- MacJournal 5 from the most recent MacHeist. The interface isn’t too bad, and I do like the ability to keep separate journals in the same interface, plus locally-organized stuff: one of my big complaints for my current writing software is that it’s difficult to manage multiple projects.
Tagging seems to be more difficult compared to the web form, which autocompletes and shows me a list of tags I’ve already used.