apocryph.org Notes to my future self

15Nov/11

How I got Dropbox installed on Kindle Fire

I just got my new Kindle Fire, which runs a modified version of Android Gingerbread.  One of the ways the Kindle Fire differs from most Android devices is that it does not have the Google Android Market; instead, apps must be obtained through Amazon’s own app store.  As of this writing, the Dropbox app is not available in the Amazon App Store, but I was able to get it installed on my Kindle Fire anyway with the following steps:

  • Click the gear icon in the top-right corner of the Kindle Fire home screen to activate the Settings window
  • Click More
  • Scroll down to Device
  • Scroll down to ‘Allow Installation of Applications from Unknown Sources’ and move the slider from ‘Off’ to ‘On’
  • You’ll be prompted to confirm you mean to do this; agree
  • Switch to the Kindle web browser and go to https://www.dropbox.com/android (NOTE: the ‘https’ is important!), and click Download App.  This will download the Dropbox .apk file
  • When the download is finished you’ll see a notification in the top left corner.  Click it and scroll down to the notification that the download of the Dropbox .apk file is complete.  Click the file.
  • You will be prompted to install the application.  Click the button
  • Once installation is completed, start the Dropbox application and log in.
  • Done!
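
If you’d rather sideload from a computer, and you already have the Android SDK’s adb tool talking to the Fire (which takes some extra setup of its own), the same .apk can be installed with a single command.  This is just a sketch, assuming the file was saved as Dropbox.apk:

adb install Dropbox.apk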

If you found this post helpful, why not get some useful accessories for your new Fire?  If you buy through these links, I get a percentage at no cost to you!

29Oct/11

How I got Hubot deployed to heroku

UPDATE: Since this writing, hubot 1.1.6 has been released on the Hubot downloads page.  You should use that instead of the 1.1.4 tag hack I describe below.  The latest tarball from the downloads page has the Procfile and an NPM dependency on hubot-scripts, so it will automatically pull in the latest hubot-scripts repository without any additional work on your part.  You should NOT download the tag tarball as I describe below.

As soon as I heard Github had open-sourced Hubot, I knew I had to have it for our dev team’s Campfire account.  Hubot can be hosted on Heroku’s Cedar stack, but getting it working involved some glitches, so I’m writing them down for posterity.

I did this deployment from my Mac, running OS X Lion, but any Linux flavor should work too. You’ll need git installed, and the Heroku command line tools should be installed and configured with your Heroku account. You can use a free account to run hubot, but you still have to register with Heroku.

The first step is getting the Hubot bits.  As of this writing, the version linked from hubot.github.com is 1.0.6, which is hopelessly old.  I had to get version 1.1.4 in order to deploy successfully.  First check the downloads page for the latest release; if 1.1.4 or later is there, just use that.  However, as of right now 1.1.4 isn’t up on the downloads page yet, so I had to get it from the 1.1.4 tag.  If you use the tarball from the tag, it will not have the Procfile which Heroku needs, so you’ll have to provide that yourself; that’s covered below.

Once you have the tarball, extract it.  It will unpack into a folder ‘hubot’. Make sure there is a file called ‘Procfile’ in the hubot directory. It should look like this:

app: bin/hubot -a campfire -n Hubot

This tells Heroku how to run Hubot. The tarball I downloaded from the 1.1.4 tag on github did not have this file, but the official tarballs from the downloads page do.

Next, you need to push this code to Heroku. Run the following commands from the hubot directory:

git init .                         # turn the extracted hubot directory into a git repository
git add .
git commit -m "Initial hubot deployment"
heroku create --stack cedar        # create a new app on Heroku's Cedar stack
git push heroku master             # deploy the code to Heroku
heroku addons:add redistogo:nano   # free Redis add-on; hubot persists its 'brain' in Redis

The git push command will deploy the code to Heroku. If it fails, make sure you have the latest Hubot tarball; 1.1.3 doesn’t deploy correctly. I used 1.1.4 with great success.

You’re not done yet. Hubot isn’t actually running; you’ve just deployed it to Heroku’s servers. The next step is to configure Hubot so it knows how to connect to Campfire. Hubot is configured through environment variables, so this is done by adding config vars to your Heroku app. Back to the command prompt:

heroku config:add HUBOT_CAMPFIRE_TOKEN="your token"
heroku config:add HUBOT_CAMPFIRE_ROOMS="your room IDs"
heroku config:add HUBOT_CAMPFIRE_ACCOUNT="your subdomain"

‘your token’ is the API token of the Campfire user which Hubot will log in as. You almost certainly want to create a dedicated user account for this purpose. To get the API token, log into Campfire as the hubot user, click on ‘My Info’ in the top right corner, and copy the API token displayed on that page. An API token is a long string of letters and numbers, like ‘asdf798792481970s98f7sfasdfgkhetge2’.

‘your room IDs’ are the ID numbers of the Campfire room(s) in which Hubot will hang out. To determine the ID for a given room, look at the URL of the room; it will look something like https://mysubdomain.campfirenow.com/room/399650. The ID for that room is 399650.
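
If Hubot should hang out in more than one room, the IDs are comma-separated.  A sketch (the second ID here is made up):

heroku config:add HUBOT_CAMPFIRE_ROOMS="399650,399651"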

‘your subdomain’ is the subdomain of your Campfire account. If your Campfire instance URL is https://foo.campfirenow.com/, then your subdomain is ‘foo’.

Once your instance has been configured, you just need to start it up. That’s one last Heroku command:

heroku ps:scale app=1

That should be it; you should see Hubot log in to Campfire as the user whose API token you provided. Test it out by entering a room Hubot is in and typing ‘hubot help’. Hubot should respond with a list of commands.

If something doesn’t work try the ‘heroku logs’ command to dump the logs from the app.

26Apr/11

Samsung monitors with ‘MagicTune’ – Stay away!

I recently bought a new monitor for my home office.  This time I splurged and got a 27″ Samsung SyncMaster SA350 from NewEgg.  It’s 27″ and 1920×1080 pixels of awesome.  Or, so I thought.

Unfortunately, I didn’t know about MagicTune.  MagicTune is the software Samsung expects you to use to make adjustments to the monitor; things like brightness, contrast, etc.  I just assumed the version on the included CD was ancient, and downloaded the latest version for OS X, 6.0.11.  Little did I know, I was entering a world of pain.

My first clue should have been that the supported OS list only went up to OS X 10.5.7; since I’m running 10.6.2 that means I’m off the reservation.  However, undaunted, I proceeded.

I installed the software without incident, though it required a reboot, which I found odd.  Upon rebooting I launched MagicTune, only to find…it didn’t do anything.  It would only display the Help tab, with links to go to the support website, and an OK button that didn’t work.

I then googled around and discovered that MagicTune is so poorly implemented on OS X that it’s for all intents and purposes not supported.  I was irritated, but I switched to my Windows 7 laptop instead.

Now things got worse.  I tried multiple versions of MagicTune, including 4.0.13 and 4.0.10, to no avail.  MagicTune would not start; I’d see a MagicTune.exe process but no GUI.  Once, after letting MagicTune run for several minutes, I finally got the MagicTune GUI, complete with a message informing me my system was not supported.  At the time I was using a DVI-to-HDMI adapter (the SA350 only has VGA and HDMI inputs; what more do you need?), so I then switched to VGA.  Finally!  I was able to access the MagicTune options and adjust brightness to my liking.

Well, sort of.  You see, the VGA settings are entirely separate from the HDMI settings, so when I switched back to the Mac via HDMI the settings were wrong again, and every time I tried to access the built-in on-screen display with the MENU button on the monitor, it just displayed the words “MAGIC TUNE”, mocking my foolish choice to purchase a Samsung.  I tried uninstalling the infernal MagicTune software on OS X, and even searched the entire volume for any trace of MagicTune or its MonitorControl_drv.kext driver, and found nothing.

So now I have a bright, shiny 27″ HD display, the brightness or contrast of which I cannot adjust because some bright Samsung engineer figured there’s no possible way MagicTune could fail to work, and therefore saw no reason to provide the ability to bypass it and access the built-in OSD.  Curse you, Samsung!

7Nov/10

SVN repository migrated to GitHub

I finally got around to migrating all of my personal projects from my SVN repository into GitHub.  I’ve been a fan of DVCS for years now, but never got around to taking the plunge for my personal projects.  That’s finally changed now.

I’m leaving svn.apocryph.org up but it will no longer be updated.  All future personal projects, and updates to existing ones, will henceforth be committed to a Git repository at GitHub.  If you’re interested in that sort of thing, go to my GitHub profile.

17Oct/10

September 2010 Kyiv Trip

This past September I spent a month living and working in Kyiv, Ukraine.  My company sent me to Kyiv to work with our outsourced development partner there on various product and quality initiatives.

More important for the purposes of this post are all the photos I took during my down time there.  I’ve broken them down by activity.

Shooting Trip

See Photos.

I went with Vadim and Rustam of Softheme to Sapsan Sport outside Kyiv for trigger time on assorted weapons.  See the photos for details.  Not included in the photos is my first attempt at shooting skeet.  It was great fun, though I was quite terrible at hitting the clay pigeons with any consistency.

Walking around Kyiv

See Photos

Next I just walked around Kyiv on my own, exploring some of the areas I didn’t see last time I was here.  I went down to the bank of the Dniepr, across the pedestrian bridge to the hydropark, then back up the hill to Lenin’s House and the park behind it.

Kyiv Pechersk Lavra

See Photos

Next I went to Kyiv Pechersk Lavra to see the famous cave monastery, a man-made cave complex dug into a hill where Orthodox monks lived, and in some cases were buried.  Just like the Roman catacombs, photography is not allowed, so none of my photos include the actual caves themselves, but suffice it to say it was a very cool experience.

The caves were not as dark and raw as the Roman catacombs, but they are also an active holy site for Orthodox Christian believers, so the caves were crowded with worshipers paying respects to the remains of long-dead saints.  After the fall of the Soviet Union, the mummified bodies of many of the monks who died in the monastery were recovered, dressed in ornate vestments, and displayed in the caves.  The bodies and heads of the mummies are mostly covered, however an odd hand or two is sometimes visible.

Most striking when viewing these remains is how small they are; I doubt many of the men were more than five feet tall.  Presumably nutrition and health in 10th century Kyiv left a bit to be desired.

After the caves we toured the above-ground monastery complex, of which I took plenty of pictures.

Chernobyl

See Photos

Yes, I was actually able to go to Chernobyl!  Irina with Softheme found an English-language tour group going to the Chernobyl Exclusion Zone and we were both able to get spots.  Access to the zone is regulated, so the only way in is with a tour group.  Those of you who know me at all know I hate doing touristy things like participating in tour groups, but for Chernobyl I of course made an exception.

The tour was comprised almost entirely of Dutch and German tourists, rounded out with a few Americans.  My don’t-fucking-talk-to-me defensive grid was fully operational, so I was able to avoid almost all interaction with the strangers in the tour group.

After a 2-3 hour drive from Kyiv we arrived at the exclusion zone.  I took plenty of pictures so you can get a sense of what it was like, but the pictures do not convey two interesting elements:

First, how quiet it was at times.  This was most obvious in the primary school in Pripyat, but I noticed it elsewhere as well.  It’s not that the exclusion zone is dead; there are wildlife and insects living there, and of course people venture in as well, but the desolation of Pripyat seems to quiet the area around it somehow.

Second, there’s a sense that some of the scenes in Pripyat are staged for the benefit of tourists.  Irina picked up on that as well, and being Ukrainian it’s pretty likely she’s right.  That’s OK I suppose, but I think the experience would be interesting enough without, for example, an abandoned children’s shoe set out on the floor in the school building for gullible tourists to regard as profound, moving, or iconic.

Gaming aside, the Chernobyl trip was awesome.  As is usually the case, photos are no substitute for being there.

National Museum of the Great Patriotic War

See Photos

Finally, I visited the National Museum of the Great Patriotic War.  This museum was opened in the early 80s and houses exhibits from the period of the “Great Patriotic War”, which we know as World War II.

The museum is actually a large park built around the Motherland Monument, an iconic statue which overlooks much of Kyiv.  Irina was able to get us on a tour up inside the statue, climbing up the statue’s left arm and standing at the top of the shield, dozens of stories above the ground.  It was an awesome experience, and I got some very cool pictures as well.  This was probably the most exhilarating part of my entire trip.

Summary

This was my second trip to Kyiv, so I knew more or less what to expect.  I’ve enjoyed both of my visits to Kyiv (thanks largely to the hospitality of my hosts at Softheme), and would encourage anyone bored of the tourist-riddled Western European capitals to try something a little more adventurous and come out to Kyiv instead.  I certainly look forward to the next time my travels find me there.

13Oct/10

“Zero History” is post-cyberpunk William Gibson’s best work yet

I just finished reading William Gibson’s latest, Zero History.  Review follows.

I’ve been a Gibson fan since Neuromancer coined the term ‘cyberspace’ and founded the cyberpunk genre years ago.  If his early work now seems dated and cliche, it’s only a consequence of the pervasiveness of Gibsonian imagery in our modern conception of digital life.  As one of the “early adopters” who discovered both cyberpunk and planet-wide IP networks when neither were particularly fashionable, I recall those early years with a mix of nostalgia and a “before it was cool” conceit.

Given my soft spot for early Gibson, it’s probably no surprise that his later works, Pattern Recognition and Spook Country, seemed to be missing the distinctive gritty futurism I’d come to know and love.  Gibson’s latest, Zero History, takes place in the same universe and follows the same storyline, so I should have found in it another good-but-not-great step off the cyberpunk path.  And yet, I loved it.

I suspect the root of my affection for Zero History isn’t my more mature, cultured sensibilities, nor Gibson’s growth as a writer.  Rather, it’s Gibson’s delightfully unexpected incorporation of Internet gun culture into what is otherwise a refined, urbane, literary novel.

I haven’t the words to explain just how much I enjoy subtle references to gun board memes hiding in plain sight among Gibson’s presumably liberal hipster dramatis personae. Reading Hubertus Bigend, the mischievous and possibly malevolent owner of Blue Ant and prominent figure in Gibson’s last three novels, define “mall ninja” was a rare treat.  Milgrim’s reflection on foliage green versus coyote brown seemed to be channeling Tam’s tactical snark for a wider audience.  There’s even a bonus RPK towards the end.

Given the extent to which Gibson has shed his cyberpunk roots, to file Zero History under ‘science fiction’ seems disingenuous; ‘literary fiction’ is perhaps more accurate.  This has the advantage of widening its potential appeal.  I would encourage fans of classic Gibson to give his latest a fair shot, I would direct anyone who recognizes the name ‘Gecko45’ to pick up Zero History posthaste, and I think educated, snobby readers of fine literature might give it a read and see what happens.

In other words, strongly recommended.

5Aug/10

Installing Ubuntu Lucid Lynx (10.04) on Dell Precision M4500

I decided to switch my new work laptop, the Dell Precision M4500, from Windows 7 Ultimate to the latest Ubuntu release, Lucid Lynx (version 10.04).  This turns out to be…nontrivial.

You see, my laptop has the NVidia QuadroFX 880M graphics card, which means trouble.

Due to a known bug in the kernel mode setting feature in newer Linux kernels (here and here), the GUI installer on the Ubuntu 10.04 install CD doesn’t display correctly on my M4500, appearing instead as a blank screen.  Control-Alt-F1 doesn’t change that either.

Here’s how I got around it:

First, just as the Ubuntu install CD starts to boot (right after ‘ISOLINUX’ flashes on the screen), I hit Escape.  This prevents the GUI installer from launching, and instead displays the screen I’m used to from the Ubuntu 9.x installers, with options like ‘Try Ubuntu without installing’ and ‘Install Ubuntu’.  I pressed F6 to display the advanced options, and selected ‘nomodeset’ from the list.  I then hit Escape to get out of the advanced options list, and selected ‘Install Ubuntu’ to launch the installer.

I proceeded through the install uneventfully, until it came time to reboot into the new Ubuntu install.  Just after the BIOS boot screen disappears, as Ubuntu is starting to boot, I held down Shift for a few seconds, which brings up the GRUB boot menu.  I selected the default boot option, then pressed ‘e’ to edit the boot configuration.  I navigated to the line that ended ‘quiet splash’ and added two additional parameters, ‘nomodeset’ and ‘noapic’.  ‘nomodeset’ disables the kernel mode setting feature that is so problematic for my card.  On my machine, if I didn’t also specify ‘noapic’ the kernel panicked during the boot process.
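
For reference, the edited kernel line ends up looking something like this (the kernel version and UUID are placeholders, not the actual values from my machine):

linux /boot/vmlinuz-2.6.32-21-generic root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ro quiet splash nomodeset noapic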

Once the boot configuration was edited I pressed Control-X to start booting.  It booted into Ubuntu without further incident.

Once in Ubuntu, I went to System | Administration | Hardware Drivers and switched to the latest proprietary NVidia drivers (IMHO, the open source Nouveau NVidia drivers are simply not ready for prime time).  I waited for these drivers to download and install, then rebooted.
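
If you prefer the terminal, Lucid also packages the proprietary driver; something like the following should install it, though the Hardware Drivers tool additionally takes care of enabling it in your X configuration for you:

sudo apt-get install nvidia-current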

After the reboot, I no longer needed to disable KMS; I can only assume the proprietary NVidia drivers don’t suffer from the same problem.

31Jul/10

Completed first AR build

Thanks to a Brownell’s gift certificate generously provided by my parents for my 30th birthday, I finally acquired all the parts I needed to complete my first AR-15 build.  I’ve owned and shot AR-15s for years, but never had I built one of my own.  At least, not until today:

[Photo: BCM Mid-length AR build]

If you’re interested in such things, here are the particulars:

  • Defensive Edge SLR15 stripped lower
  • RRA lower parts kit
  • Bravo Company Manufacturing (BCM) Mid-length 1/7 complete upper
  • BCM Full Auto bolt carrier group
  • BCM charging handle
  • Daniel Defense Omega Rail 9.0 mid-length handguards
  • MagPul MIAD grip
  • VLTOR mil-spec receiver extension tube
  • MagPul CTR collapsible stock
  • MagPul ASAP receiver extension plate
  • MagPul PMAG30 magazine
  • Brownells castle nut
  • Superior Shooting Chromoly/Silicon flat buffer spring

Not pictured but on the rifle now:

  • MagPul MS2 sling
  • EOTech 512 reflex sight

Not obtained yet but destined for the rifle:

  • BCM GunFighter charging handle
  • MagPul MOE rear BUIS

I used the Brownells AR-15 instructional videos on the Brownells web site to help me understand how the parts went together.  I didn’t have the lower receiver vise block, but all the other AR build tools were at my disposal.

I was quite surprised how easily everything went together.  Once I figured out which springs and roll pins in the lower parts kit went where, it was just a matter of watching the videos and following along.

Already I’m thinking about other builds.  I have two more LPKs, four stripped lowers, and one more BCM mid-length upper lying around.  I’d like to do something with one of the Spike’s Tactical mid-length .22LR uppers, plus I’ve often wondered what a 9mm build would be like; I have the C-Products 9mm mag block and 10 30rd 9mm mags just waiting for a gun to go in.

Much like PCs, the AR-15 platform is so modular and versatile you can build just about anything you want given sufficient funds and available parts.  Now that I see how easy AR builds can be, I’m sure to build at least a few more…

23May/10

Getting Reg-Free COM activation working between managed and unmanaged DLLs

At work I’m transitioning the logging framework used by one of our products from log4cplus to the .NET logging framework NLog.  Since an increasing portion of our product code is in C#, using a .NET native logging framework made sense.  However, we still have a large body of legacy C++ code that I’m not willing to modify, so I needed some way of exposing the NLog logging code to the legacy codebase.

To simplify the situation somewhat, here are the components of my solution:

  • aadiag - A C++ DLL that exports methods for logging.  All of the legacy C++ code does its logging through this DLL
  • logging - A C# DLL that wraps NLog and is used by our C# code for logging.  This DLL uses COM Interop to expose a COM coclass, ComLoggerFactory, to unmanaged code
  • service - A C++ EXE that implements a Windows service of some kind

I modified aadiag so that it would instantiate and call the COM interop coclass in logging in order to perform its logging. That worked fine, except for one hiccup: I had to remember to register logging.dll. If I forget (and I often do, since registration requires admin rights, which I don’t generally run with), all the C++ logging fails.

To solve this problem I started researching Registration-free COM. Unfortunately, most of what I read did not address the use of COM interop, and assumed a simplistic architecture in which an EXE maintains one or more dependencies on in-proc COM coclasses. Since in my case I have an unmanaged DLL (aadiag) that depends on COM coclasses in a managed DLL (logging), the fairly straightforward instructions I found were useless.

Here’s what I did instead, broken down by component:

Logging (Managed DLL)

I added a new item to Logging’s Visual Studio 2010 project.  I used the ‘Application Manifest’ item template, but I threw out the entire contents the template generated for me.  Here’s the manifest I used instead:
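
(A minimal sketch of such a manifest; the GUID, version, and type names below are placeholders rather than the real values from my project.)

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="Logging" version="1.0.0.0" />
  <clrClass clsid="{00000000-0000-0000-0000-000000000000}"
            name="Logging.ComLoggerFactory"
            threadingModel="Both" />
</assembly>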

Note the clrClass elements; they are the critical component. The clsid property is the value of the Guid attribute on each class; the name is the fully-qualified type name.

That creates the manifest, but it won’t automatically embed it in the DLL. In the project properties, the Application tab has a convenient dropdown where you choose which manifest to embed. One problem: it’s always disabled for Class Library projects. You’d think that would be a bug, right? That’s what this guy thought too. Only problem is, MSFT says this behavior is by design!

I imagine the Visual Studio team responsible for this particular tab couldn’t think of a single reason why anyone would want to embed a manifest in a DLL. After all, most of the features controlled by a manifest either don’t apply to .NET assemblies, or are only relevant in executables. Well, guys, here’s a reason I bet you didn’t think of: registration-free COM!

Fortunately, they did a pretty half-assed job of disabling manifests for class libraries. It’s only disabled in the GUI; if you edit the .csproj file manually you can still configure a manifest to embed. Just open the .csproj file in a text editor, and find the first PropertyGroup element. Insert this line at the end of that element:

<ApplicationManifest>app.manifest</ApplicationManifest>

Be careful with this, though; if anyone fiddles with any settings on the Application tab in the future, this gets blown away.

AaDiag (Unmanaged DLL)

First, aadiag needs a manifest too, which describes its dependency on the Logging DLL. I just created a new ‘Text File’ called ‘app.manifest’, added it to the project, and filled it in. Unlike the C# projects, .manifest files are automatically merged into the manifest generated for the DLL. The contents of app.manifest are pretty simple:
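
(Again a sketch rather than the exact file; the version numbers are placeholders.)

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="aadiag" version="1.0.0.0" type="win32" />
  <dependency>
    <dependentAssembly>
      <assemblyIdentity name="Logging" version="1.0.0.0" />
    </dependentAssembly>
  </dependency>
</assembly>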

That’s the easy part. The hard part is creating and activating a custom COM activation context which uses the manifest we just created to help resolve COM coclasses. In my case, there was one place in the code where I created the ComLoggerFactory coclass, so I only had to do this once. Your code might have more places:
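
(The code here is likewise a sketch rather than the original listing.  It uses the standard Win32 activation context APIs; g_hThisDll is a hypothetical module handle saved in DllMain.)

// Build an activation context from the manifest embedded in this DLL,
// activate it, create the managed coclass, then clean up.
#include <windows.h>

extern HMODULE g_hThisDll;   // saved in DllMain (hypothetical name)

HRESULT CreateComLoggerFactory(REFCLSID clsid, REFIID iid, void** ppv)
{
    wchar_t path[MAX_PATH];
    GetModuleFileNameW(g_hThisDll, path, MAX_PATH);

    ACTCTXW ctx = { sizeof(ctx) };
    ctx.dwFlags = ACTCTX_FLAG_HMODULE_VALID | ACTCTX_FLAG_RESOURCE_NAME_VALID;
    ctx.lpSource = path;                          // DLL with the embedded manifest
    ctx.hModule = g_hThisDll;
    ctx.lpResourceName = MAKEINTRESOURCEW(2);     // ISOLATIONAWARE_MANIFEST_RESOURCE_ID

    HANDLE hCtx = CreateActCtxW(&ctx);
    if (hCtx == INVALID_HANDLE_VALUE)
        return HRESULT_FROM_WIN32(GetLastError());

    ULONG_PTR cookie = 0;
    if (!ActivateActCtx(hCtx, &cookie))
    {
        ReleaseActCtx(hCtx);
        return HRESULT_FROM_WIN32(GetLastError());
    }

    // With the context active, CoCreateInstance resolves the coclass from the
    // manifest instead of the registry.
    HRESULT hr = CoCreateInstance(clsid, NULL, CLSCTX_INPROC_SERVER, iid, ppv);

    DeactivateActCtx(0, cookie);
    ReleaseActCtx(hCtx);
    return hr;
}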

And that’s pretty much it. Now, as long as Logging.dll and aadiag.dll are in the same directory, things Just Work. Too bad MSFT seem to go out of their way to make this hard.

18May/10

The Code Analysis feature in Visual Studio 2010 Sucks

I’m leading the switch from Visual Studio 2008 to Visual Studio 2010 at my company.  In addition to the usual migrating of countless projects and fixing of niggling errors, I thought it would be cool to use the Code Analysis feature built in to VS 2k10 to automatically run our new C# code through a set of rigorous checks for bad code smells and possible bugs.  From a policy perspective it was really easy; I made a new ruleset made up of most of the MSFT rules which ship with the product, then configured the C# projects to perform code analysis with my custom ruleset file.  Then the FAIL started.

First, I wanted Code Analysis warnings to be included in the ‘Treat Warnings as Errors’ feature of the compiler.  That’s not possible, so instead I had to change every warning in my ruleset into an error.  Nice.

Next, I wanted to get existing code under analysis even though it generates tons of warnings, by excluding all existing files in my legacy projects.  This way, as files are added they will be checked, but existing files don’t have to be fixed all at once.  Turns out, you can’t do that.  The rules are the rules, for every file in the project.

After that, I brought a simple NUnit test suite project into the Code Analysis fold.  The project was dead simple, so I didn’t expect any Code Analysis violations.  Little did I know.  You see, most (actually, all) of my unit tests are public non-static methods which do not access any instance methods or variables.  Code Analysis issues warning CA1822, pointing out that the method could be made static for better performance.  Normally this is 100% correct and I want to be alerted to it, but these are unit tests.  They MUST NOT be static.  Thus, I decided as a matter of policy that CA1822 would be ignored for unit test projects.

Only, that’s not how it works.  The SuppressMessage attribute suppresses ONE instance of a warning.  If you have a code file with 100 warnings, there’s no way to suppress all warnings in that file.  None whatsoever!  I can only imagine the conversation between preening code bureaucrats at MSFT:

“We should probably have a way to disable some of these warnings on a file or namespace basis”

“Why?  Then stupid users will just disable the warnings instead of addressing them.  Let’s make them create a separate SuppressMessage attribute for every single violation.  It’ll make their code look like shit, and will piss them off so much they just quit using Code Analysis entirely”

“Prescriptive, simplistic, crude, and ultimately useless.  I like it!”
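
In practice, the best you can do is decorate every single offending test method, something like this (a sketch; the category and check-ID strings are from memory, and the test itself is hypothetical):

[Test]
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance",
    "CA1822:MarkMembersAsStatic",
    Justification = "NUnit test methods cannot be static")]
public void Login_WithValidCredentials_Succeeds()
{
    // arrange / act / assert ...
}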

As a result, we have a feature that should’ve been integrated into Visual Studio long ago, implemented in such a way as to be useless to most real world software projects.
