Thoughts on the Moves Privacy Policy

For a while, I've been using the Moves app for iOS. It's a little application that uses the accelerometer and GPS data from your phone to tell you where you've been and how many steps you've taken and so on and so forth. I've been using it in no small part because of their strong third-party privacy policy, which said:

We do not disclose an individual user’s data to third parties unless (1) you have given explicit consent to each such disclosure, (2) we are required to comply with a legal obligation or (3) if our business or assets, or parts of them, are acquired by a third party.

Unfortunately, as you may know, Moves was acquired by Facebook last month, and I'm sorry to say that their stance on user privacy has not improved. Today, Moves updated their Privacy Policy, and it's not good stuff.

For better or worse1, Moves (like many companies) does not post diffs when they change their privacy policy. So I'm doing it for you. I extracted the current privacy policy (as of 2014-05-05) and the previous one (edited 2013-09-17) from The Wayback Machine, then reformatted them as Markdown. You can view them at https://gist.github.com/Roguelazer/7e59bb615c3e5a38b036; the diff itself is at https://gist.github.com/Roguelazer/7e59bb615c3e5a38b036#file-september_versus_may-patch.

Important Disclaimer

I am not a lawyer. If you think you may be affected by the changes to this legal document, you should consult with your attorney. Please don't cite me in court or sue me over interpretation. This document does not constitute legal advice and is for entertainment and outrage purposes only.

Interpretation of Changes

The biggest change, to me, is the fact that Moves is now reserving the right to share all of your data with (roughly) anyone at any time. The relevant clause:

We may share information, including personally identifying information, with our Affiliates (companies that are part of our corporate groups of companies, including but not limited to Facebook) to help provide, understand, and improve our Services.

Also interesting to me is the following passage which was removed:2

We will not display or otherwise disclose information where individual users can be recognized. Furthermore, our developers need to occasionally review raw data and the results for recognized activities/routes/places to improve the system. They will only see the unique identifier number with the data.

As far as I can tell, Moves is being set up by Facebook to monetize, share, and potentially leak your personal movements, and to inherit Facebook's famously-shoddy isolation of user PII.

Having read this document, I have removed the Moves application from my phone. If anyone is aware of any personal-awareness-movement-tracking apps which promise not to sell your location to the highest bidder, please let me know. And if you still have the Moves application on your phone, well, I hope you get a chance to take a look at the detailed changes to the Privacy Policy and make an informed decision.

1

Who am I kidding: for worse. Any company that does this is scummy and untrustworthy and, unfortunately, is also every single company I can think of (including the one I work for).

2

This is of particular interest to me because it was one of the best such clauses in the industry. A lot of companies (including Facebook) do not do anything to prevent developers from viewing your most personal information, and there have been some rather hushed-up scandals related to that. I would love to live in a world where developers take the time to do their jobs without looking at your personal travel logs or selfies. It's laziness and some slavish adherence to "agile" which prevents companies from embracing this philosophy, and it's definitely one of my pet peeves.

TeX is Huge

I was installing MacTeX on my MacBook Pro today and had an amusing realization. First, some background: for those of you who don't know, TeX is a phenomenal family of typesetting programs originally written in 1978 by two of the giants of 20th Century computer science, Don Knuth and Guy Steele. Most people now use it in conjunction with a slightly more modern set of extensions called LaTeX, written by Leslie Lamport in the early 1980s. I used TeX/LaTeX to typeset several thousand pages of homework and other assignments in college.

Now, in early 2014, the download for the OS X distribution of TeX+LaTeX is 2.3GiB, and it actually occupies about 3.5GiB of disk space when installed. How does this compare to 1978? Well, one of the cheapest options for storage in 1978 was the DEC RK05, a gargantuan 2.5MiB cartridge disk drive, which cost $7,900 for the drive and $99 for each disk.

To store the installation of MacTeX-2013, we'd need 1,434 of these disks. This would cost $149,866 ($543,133 in 2014 dollars) and would form a cylinder 14" in diameter and 358' tall1, which would weigh about 100,000 pounds2. Based on some cursory googling, this seems like it'd be a stack of disk cartridges roughly as tall as a 25-story skyscraper and weighing about as much as 10 African bull elephants. Also, apparently the cartridges have embedded read/write head magnets and will erase one another if left in close proximity3, so that stack would be a terrible way to store your data.
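The back-of-the-envelope math above can be rerun in a few lines of Python (every figure is one quoted in this post):

```python
import math

INSTALL_MIB = 3.5 * 1024   # MacTeX-2013 installed size, in MiB
DISK_MIB = 2.5             # capacity of one RK05 cartridge, in MiB
DRIVE_COST = 7900          # one RK05 drive, in 1978 dollars
DISK_COST = 99             # one cartridge, in 1978 dollars
DISK_HEIGHT_IN = 3         # estimated cartridge height (see footnote 1)

disks = math.ceil(INSTALL_MIB / DISK_MIB)
total_cost = DRIVE_COST + disks * DISK_COST
stack_height_ft = disks * DISK_HEIGHT_IN / 12

print(disks)            # 1434
print(total_cost)       # 149866
print(stack_height_ft)  # 358.5
```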

I wonder what Knuth and Steele would think of the fact that, on 1978 storage, their little typesetting software would stack up taller than the tallest building in a good fraction of the cities in the world?

At least it's still cheaper than San Francisco real estate.

1

In this image, the cartridge is 77 pixels tall and the image is 245 pixels tall. According to this table, the entire assemblage is 10.5" tall. Multiplication yields 3.3" for the cartridge, and since DEC tended to like round numbers, I'm going to assume that it's actually 3" per disk. Multiply that by 1,434 disks, and you get 358 feet.

2

This article indicates that the later, lighter RL02 drive cartridges weighed 70 pounds each.

3

RK05 Disk Drive Maintenance Manual section 2.4 "Cartridge Packing and Shipping"

Alfred + dc

I use Alfred 2 a lot on OS X in order to get things done. It doesn't completely change how I use the operating system, but it comes close. However, one of my pet peeves about it has always been that the built-in calculator is pretty terrible (even with the "advanced" equals-sign calculator). I realized this morning that I could fix this, and, lo, the dc alfred workflow was born.

It just takes its input and runs it through the dc command-line utility, giving you a fully-programmable RPN calculator. It's not quite as great as PCalc, but it's just a ^-space away. If you have Alfred, you can just install the workflow and then use the script filter "dc" to run your math commands. Example:
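For anyone who hasn't used dc: it reads reverse-Polish notation, where numbers get pushed onto a stack and operators pop their operands off it. This toy evaluator (just a sketch of the idea, not the workflow's actual code, which simply shells out to dc) shows how that works:

```python
def eval_rpn(expr: str) -> float:
    """Evaluate a space-separated RPN expression, dc-style."""
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for token in expr.split():
        if token in ops:
            b = stack.pop()  # the top of the stack is the second operand
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack[-1]

print(eval_rpn("2 3 4 * +"))  # 14.0, i.e. 2 + (3 * 4)
```

The real dc adds registers, arbitrary precision, and macros on top of this, which is what makes it "fully-programmable."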

dc alfred workflow screenshot

Interesting SSL Issue

Shortly after I upgraded to OS X 10.9.2, I was connecting to battle.net, and I got an SSL error. At the time, I didn't think anything of it (after all, sites have bad SSL certificates all the time). However, I noticed it again today when looking at the page for Reaper of Souls, and decided to look into it again. When I did, I found something very unusual: my system has a second copy of the DigiCert root CA certificate in the "login" keychain. For those of you who aren't familiar, OS X uses a hierarchy of binary key/password databases called "keychains" to store sensitive materials. Generally, root CA certificates are only found in the System Roots keychain; the "login" keychain (which is a per-user keychain writable without root privileges) is only used to store passwords and other application-level data.

read more

Pebble Steel First Impressions

I've had an article sitting in Draft status since June 2013 about the Pebble smartwatch which I bought during their Kickstarter campaign. The article essentially said that the Pebble has awesome features, but feels like a toy and scuffs if you look at it askance. I was planning on going into detail about how apps like httpebble and smartwatch+ feel immensely hackish.

Well, as of today, I'm confident reporting that Pebble has resolved all of these issues with the Pebble Steel and Pebble OS 2.0.

read more

I Got Sick

As I alluded in my last post, I've had a fair bit of extra time on my hands for the last couple of weeks. That's because I've been quite ill. This post is the exciting story of what I've been sick with.

Starting in the evening of Friday, 2014-01-10, I had body aches and a fever. Now, at first, I didn't even know if anything was wrong — I'd slept poorly the night before, and maybe I was just feeling poorly because E was going back to school. Unfortunately, when I woke up on Saturday, 2014-01-11, the fever was worse, so I stayed in bed all day. I didn't feel too terrible (I went to the grocery store and did my regular errands on Sunday, and even went to work on Monday), but I knew I was sick. I just hoped it was a cold or something else minor that would go away by itself. Unfortunately, on Tuesday, the fever was high enough that I knew I wouldn't be productive, so I had to stay out of work. Wednesday continued the high fever and brought intermittent sore throat (only on the left side, though, which was weird), which was enough for me to decide to go to the doctor.

Of course, like so many irresponsible young people in my age cohort, I didn't actually have a doctor. So I decided to sign up with One Medical Group, which is a pretty cool concept. For a nominal yearly fee, you get not only a doctor, but the ability to make same-day appointments to see any doctor in their network. A $15 copay and the ability to be seen by a GP-type doctor definitely beats having to go to the ER for minor illnesses, right? So, Wednesday night, I took an Uber out to Pac Heights and met with Dr. Ciccarone. He looked at me, looked at my throat, and told me that my left tonsil looked like a textbook picture of strep throat. Then he did a RADT, which was negative for strep. Hm. I guess he had some kind of hunch, because he poked me on both sides of my belly and asked me which one hurt more. Apparently, I answered the side that my spleen is on, and a sensitive spleen is diagnostically relevant for things that aren't strep throat. He still took a full strep culture, though. Then he told me to keep taking ibuprofen and report to the phlebotomist the next day to get blood drawn.

The next day (Thursday), I woke up feeling even more miserable (temperature hovering around 102°F / 39°C), took some more ibuprofen, and went to a different One Medical Group office to get my blood taken... which was quite painless.

Epstein-Barr Virus 1

Friday, a different doctor called me to tell me that the blood tests were positive for Epstein-Barr Virus, which causes a disease known as mono. Symptoms of mono include fever, sore throat, and fatigue. And, as a virus, there's next to nothing you can do about it except wait it out. So, over the weekend, I waited it out, while the fever and sore throat got worse. By Monday, I couldn't swallow solid food at all, and even liquids were challenging.

beta-hemolytic strep 2

Fast forward to Tuesday, 2014-01-21. I got another message from the doctor — apparently, the throat culture that Dr. Ciccarone had done last Wednesday came back for "β-hemolytic Streptococcus, not group A isolated", which is apparently a somewhat uncommon variant of strep, the symptoms of which include fever, sore throat, and fatigue.

Yep. I got both mono and strep. Which probably helps explain why I've been so miserable lately. The doctor prescribed Cefuroxime to wipe out the strep, and continuing rest for the mono.

Finally, after a frankly miserable two weeks in total, the fever finally broke on Friday, 2014-01-24, just in time for my coworkers to send me a care package containing delicious Wise Sons challah.

As of this posting, I have no idea how long it'll take for the rest of my symptoms to go away, but I'm just glad that the fever's gone and the sore throat has eased up enough that I can start eating actual food again and contemplate doing actual work.

1

(2005) Virus Proteins Prevent Cell Suicide Long Enough to Establish Latent Infection. PLoS Biol 3(12): e430 DOI: 10.1371/journal.pbio.0030430; Creative Commons Attribution 2.5

2

(1979) Photomicrograph of Streptococcus pyogenes bacteria, 900x magnification from the CDC. Public Health Image Library ID#2110; Public Domain

New Site, Again

Hello dear readers. If you can see this, then it means the new, redesigned <roguelazer.com> is up and running. I got tired of dealing with WordPress vulnerabilities regularly, and was somewhat embarrassed to have a site running on PHP. After all, there isn't actually any dynamic content here, so why bother?

The site is now a bunch of Markdown files compiled into HTML using the Pelican framework.

Major wins:

  • Much faster, since it's just static HTML
  • Back to being standards-compliant HTML5
  • Behaves sanely on mobile, for the first time ever (thanks Bootstrap!).
  • Better Atom and RSS feeds (although they're at a new URL...)
read more

A Rant on Redis

It's been a while since I posted, and I've been spending a lot of time at work fighting with Redis, one of the darling databases of the NoSQL era, so I thought I'd grace y'all with a brief rant on Redis: what it's good at, what it's bad at, and so on.

What is Redis?

Redis is an open-source moderately-structured in-memory key-value store. This means that, unlike full relational databases, it doesn't have a fixed schema, it can't perform server-side operations like joining and filtering data1, and it's theoretically faster. Redis looks an awful lot like memcache, but has support for basic data structures (lists and hashes), and can theoretically write its data to disk instead of just keeping it in memory.

For these reasons, I am wholly 100% in favor of using Redis, as long as you use it strictly as a memcache replacement: a temporary cache to make your application faster, or a backing store for non-essential, short-lived data like password resets. It can be safely(-ish) restarted thanks to its ability to persist to disk, and the data structures make it a lot easier to organize code (I have definitely seen memcache instances where people emulated lists by having a bunch of keys named "keyname_1", "keyname_2", etc. It was not good code.).

What is Redis not?

Redis is not a persistent database. It has disk persistence (which I will go into at great depth below), but it can never operate unless 100% of its data fits in memory. Contrast this to traditional relational databases (like PostgreSQL, MySQL, or Oracle), or to other key-value stores (like Cassandra or Riak), all of which can operate with only a "working set" of data resident in memory, and the rest on disk. Given that ECC-RAM is still about $15/GB and even the fastest SSDs are about $2.50/GB, this makes Redis a very expensive way to store your data. It makes sense if you're going to be careful to ensure that the only data in Redis is hot data (which you might if you're using it as a cache), but it absolutely does not make sense for long-term, sparsely-accessed data.

Redis is also not highly available. It has replication (which I will go into at great depth below), but only barely. It doesn't have any real clustering support (yes, I'm aware of WAIT and find it unsatisfactory), doesn't have any multi-master support, and just really seems to not want to be used in a highly-available fashion. If you want a key-value store that does that, I suggest you look into Riak.

Disk Persistence

Right out of the gate, one of Redis's biggest wins over memcache is its ability to persist to disk. It has two mechanisms for doing this: "RDB" and "AOF". RDB takes a snapshot of everything in memory and periodically writes it to disk in a somewhat optimized format; AOF takes every single statement ever run against a redis instance and writes them to a single file which it periodically optimizes to remove duplicates. These both have some pretty serious limitations that make me not recommend either of them for production settings if you can possibly avoid it.

RDB

In order to write its full snapshot to disk, RDB forks the main redis process and does a bunch of work on it. Now, this is all well and good on Linux where copy-on-write memory means that that fork should be relatively free. However, the "copy" part of copy-on-write does kick in if you're on an active server. Quite often with a moderately-loaded server, you can build up several additional GB of memory usage during an RDB write. So you'd better make sure to leave lots of RAM free.2

AOF

At first blush, the append-only-file (AOF) looks a lot like the binary write-ahead logs used for replication in standard relational databases (binlogs in MySQL, WALs in PostgreSQL). Basically, every time a command comes in, it's appended to the AOF log by redis. Then, if the server restarts, it can simply replay the entire log in order to get back to the state it was in before.

Unfortunately, AOF logs have what I consider a major problem: it's a single log of every statement since the beginning of time for the server. This means that if you have a lot of updates to existing keys happening (common when using Redis as a database), or a lot of keys expiring (common when using Redis as a cache), the AOF log will be many, many times the size of your database, and will take minutes to hours to replay — very bad if you're trying to recover from an outage.
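A toy simulation makes the growth problem concrete (the numbers here are invented for illustration, not measured from a real instance): overwrite a small keyspace many times and the naive AOF keeps every write, while the live dataset never grows.

```python
# Simulate an update-heavy workload against a small keyspace.
aof = []   # stand-in for the append-only file: one entry per command
data = {}  # stand-in for the actual dataset

for i in range(100_000):
    key = f"counter_{i % 1000}"   # only 1,000 distinct keys, overwritten repeatedly
    aof.append(("SET", key, i))   # the AOF records every single command...
    data[key] = i                 # ...but the dataset never exceeds 1,000 keys

print(len(aof))   # 100000 entries to replay on restart
print(len(data))  # 1000 keys actually live
```

Here a restart replays 100x more statements than there are keys; on a long-lived production instance the ratio only gets worse.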

Redis has a solution for this: BGREWRITEAOF. This causes redis to "optimize" the AOF, rewriting it to eliminate unnecessary updates and expired keys.3 Of course, since the single AOF log contains every statement since the birth of the database, this process takes unacceptably long on all but the smallest of databases, and tends to consume an inordinate amount of I/O.

The real problem with AOF is that there is no way to run it from a point in time that isn't the start of the server. You can't have an RDB at time x and then only keep AOF logs since x. There's no way to reasonably combine the efficiencies of RDB and AOF, despite the fact that every other database system has supported this behavior for decades.

Replication

Redis has asynchronous replication. If you run any kind of large product, that should warm your heart a bit — replication is the best way to build high availability and failure-tolerant systems. Unfortunately, Redis's replication is probably the most naive, useless form of asynchronous replication I've used — I guess it falls somewhere between "sqlite on an nfs share" and "postgresql 8.x".

Replication is naive

Redis implements replication by first issuing a SYNC command when a slave connects (which just does an RDB save and copies it over a TCP socket), and then streaming every subsequent write command to the slave. That's all. If a node loses network connection for a while, it has to copy the whole database again, which is almost never tolerable with non-trivially-small databases and multiple datacenters. Redis 2.8 attempts to improve this with the PSYNC command, which is about the most naive improvement possible: you specify a buffer of some number of seconds, and if a slave disconnects for less than that time, it replays out of the buffer instead of re-downloading the whole database. Oh, Equinix needs to perform power maintenance and one of your datacenters is going to be offline for a couple of hours? Too bad: either you buffer several hours of data in memory at all times on your master, or you retransfer all of your Redis databases across the network. You should just live in the magical fairyland where nothing ever breaks!
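To put numbers on that: the PSYNC backlog (the `repl-backlog-size` setting) has to hold every write that happens while a slave is disconnected. A rough sizing sketch, using a made-up write rate:

```python
write_rate_mbit = 10   # hypothetical sustained write traffic, in Mbit/s
outage_hours = 2       # the power-maintenance window from above

# bytes/s held in the backlog, times the length of the outage
backlog_bytes = write_rate_mbit / 8 * 1024 * 1024 * outage_hours * 3600

print(backlog_bytes / 2**30)  # ~8.8 GiB of backlog, resident in the master's RAM
```

That's nearly 9 GiB of RAM permanently reserved on the master just to survive one two-hour outage at a modest write rate; anything longer and you're back to a full resync.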

Replication is un-monitorable

Would you like to know how far behind your slaves are? Well, you'd better implement your own heartbeating, because Redis doesn't expose that. The manual helpfully suggests that you can determine whether a slave is caught up to the master by seeing if the number of keys is the same on both. Because there are no operations in redis-land which can change data without changing the number of keys. There are a couple of replication-related fields in INFO, but they don't actually help if you're trying to figure out the exact status of a cluster:

  • master_link_status seems to say "up" all the time, even when it isn't
  • master_last_io_seconds_ago is relatively useless: it doesn't differentiate between replication problems and a master that simply isn't doing anything (and it only has 1s granularity, which is useless)

That's it. Nothing else.

Replication is one-directional

Something I would absolutely love to have in Redis is master-master replication. Imagine if you could set up two servers and write to either of them and have them become eventually-consistent. It would be like some kind of key-value nirvana! MySQL has supported this feature for 14 years (since MySQL 3.23). Unfortunately, Redis doesn't have any support for this. An instance can be a master or a slave, but never both. And there's no reconciliation in the replication code anyway. Hm, maybe this belonged in the "Replication is naive" section...

High Availability

Well, Redis is so simple, at least it should be easy to make highly-available, right?

C'mon, you know the answer to that.

Of course it isn't.

As discussed above, you can't have a cluster of eventually-consistent Redisen. That right there rules out the HA strategy commonly employed by key-value stores of just having a lot of them.

Well, at least we could have a single master and a bunch of read slaves, and then promote one quickly to be master, right? No, wrong. Since there is no exposure of replication coordinates by Redis, there's no way to know which of the read slaves has the latest data, so there's no way to know which one to promote.

Well, okay, at least you can use a sharding redis proxy like twitter's twemproxy to distribute your data to lots of redis masters, and if one of them goes down, you only lose 1/n of your keys, right? Well, sort of. Twemproxy fails to support an absolutely stupefying 42 of redis's 98 commands.4 Some of these make sense, but the fact that twemproxy kills your connection if you issue a PING is just madness (and, in fact, is ticketed). Twemproxy specifically has other issues; my favorite is issue #6, which is that you can't change the twemproxy config file without downtime (which I've built a horrible/hilarious workaround for at Uber). Real A-class software in this Redis community.

Redis has an unstable tool called sentinel which is supposed to fix some of these issues by managing slaving across a redis cluster for you. As far as I can tell in my limited experimentation, all it can do is detect some kinds of redis failures and change what the master is, at the cost of running yet another implementation of byzantine agreement. Of course, it still requires that you run either nonexistent sentinel-aware clients (which add a bunch of new, exciting failure modes to your application), or that you manage failover out-of-band using keepalived or carp. Which seems to sort of completely invalidate the point of having an application to manage clusters for you.

Other Gotchas

Redis has a parameter called maxmemory in its config file. Do you know what redis will do by default when it hits maxmemory? Absolutely nothing if you're using redis as a database. The default behavior is volatile-lru, which means that when it hits maxmemory, redis will examine keys with an expiration time (keys set with SETEX) and LRU out some of them. It won't look at any non-expiring keys (which most of yours will be if you're using redis as a database), it won't consider objects by size, and it won't raise any errors. There are two sane choices for the maxmemory-policy option:

  • allkeys-lru: Redis will, using its lossy LRU algorithm, choose a key from your entire keyspace to delete when it gets towards the upper boundaries of memory
  • noeviction: Redis will return a "too much memory" error on writes. WHY GOD WHY IS THIS NOT THE DEFAULT!?
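If you do run Redis as a database, the corresponding redis.conf lines look something like this (the 4gb cap is just an example value):

```ini
# Cap memory usage; with no maxmemory set, Redis will grow until the box OOMs.
maxmemory 4gb

# Fail writes loudly instead of silently LRU-ing out expiring keys
# (volatile-lru) or arbitrary keys (allkeys-lru).
maxmemory-policy noeviction
```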

Redis gets hella fragmented. Unlike traditional databases, which know how big a tuple/row is and can allocate memory reasonably, redis has to rely on slab-allocation algorithms similar to memcached's. This means that over regular use it will get highly fragmented, and while it may only have 1GB of data in it, it might be taking up 10GB of RAM (which means 20GB of RAM when you're RDBing). This data is exposed in INFO, thankfully. Unfortunately, redis has no internal "cleanup" routines to reduce fragmentation — your only option is to take an RDB dump and then restart redis. Doing this because of an on-call page this morning, I freed up about 10GB of RAM on one of our clusters. It's sort of shameful that you have to be aware of this and willing to re-jigger your replication topology every few weeks just to prevent death by fragmentation, but what else do you expect?
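The fragmentation shows up in INFO as `used_memory` (what Redis thinks it's holding) versus `used_memory_rss` (what the OS actually gave it); Redis reports their quotient as `mem_fragmentation_ratio`. The 1GB-of-data/10GB-of-RAM scenario above works out like this:

```python
used_memory = 1 * 2**30        # bytes of live data, per INFO
used_memory_rss = 10 * 2**30   # resident set size, per the OS

mem_fragmentation_ratio = used_memory_rss / used_memory
print(mem_fragmentation_ratio)  # 10.0 — anything much above ~1.5 deserves a look
```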

We had an awesome issue at work because some user accidentally issued the command FLUSHALL to a production redis box. Why did a regular consumer have the ability to do that? Because Redis doesn't have any concept of permissions. The solution was to use Redis's (non-runtime-alterable) rename-command directive to rename the FLUSHALL command to something that the client wouldn't know about. That's somewhat like doing mv /bin/rm /bin/nobody-will-ever-guess-this-name as a way to fix the security of your Unix box where everyone has root.

Redis is inconsistent. The pidfile parameter takes a full path, but the dbfilename parameter takes a relative path based on the dir parameter. The parameter for the AOF filename (appendfilename) does not appear anywhere in the documentation5. You have to read config.c yourself to know what it is.

Redis is stupid. It traps SIGSEGV and overrides it to write an error to the log and longjmp back to where it was. I have no other words for that behavior.

Wrapping up

Obviously this is a lot of gripes. I want to emphasize that if you use redis as intended (as a slightly-persistent, non-HA cache), it's great. Unfortunately, more and more shops seem to be thinking that Redis is a full-service database and, as someone who's had to spend an inordinate amount of time maintaining such a setup, I can tell you it's not. If you're writing software and you're thinking "hey, it would be easy to just put a SET key value in this code and be done," please reconsider. There are lots of great products out there that are better for the overwhelming majority of use cases.

Well, that was cathartic. I look forward to your response flames, Internet.

1

Actually, you can write lua scripts that run on the server-side, but they add so much maintenance headache that I can't responsibly recommend that.

2

With a 20GB database which you write to at 50Mbps and which is backed by a RAID1 of nice Seagate ST300MP0034 SAS drives (which are rated for 228MBps and should be able to achieve something between 1/3 and 1/2 of that in practice), when an RDB snapshot occurs, it's going to take 89 seconds to write the data in the absolute best case, and 179 seconds in what I would consider a reasonable case. That comes out to 1.07GB of extra data that's going to be written into the database while the RDB is in progress. If that data happens to be sparse (and causes a large number of 4KB pages to be COW'd), you're looking at many GB of extra RAM that you need.
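Spelling out the arithmetic in this footnote (same figures as above):

```python
DB_GIB = 20         # database size
WRITE_MBPS = 50     # incoming write traffic, megabits per second
DISK_MBYTES = 228   # rated sequential throughput of the drives, MB/s

best_case_s = DB_GIB * 1024 / DISK_MBYTES        # drives hit their rated speed
realistic_s = DB_GIB * 1024 / (DISK_MBYTES / 2)  # drives achieve half of rated

extra_mib = WRITE_MBPS / 8 * realistic_s  # writes arriving during the snapshot

print(best_case_s, realistic_s)  # ~89.8 and ~179.6 seconds
print(extra_mib / 1024)          # ~1.1 GiB — the footnote's ballpark figure
```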

3

Actually, redis just makes a new AOF by forking and writing the contents of its current memory as an AOF. It never processes the old one on disk at all. So this process has the same RAM downside as RDBs!

4

The full list: //files.roguelazer.com/twemproxy_unsupported_commands.txt. Issuing any of these will cause twemproxy to close your TCP connection abruptly. COOL!

Serious question about urban planning policy

Skye retweeted an article today which made me realize that I really don't understand something: what do people who are profoundly anti-gentrification want? The argument that I see usually goes like this:

  1. Rich people are moving into a traditionally mixed neighborhood
  2. The big spike in demand drastically drives up rent
  3. "Normal" folk can't afford to live there (usually "normal" is defined as "poor and racially diverse", sometimes it's instead defined as "people who've lived here longer than these whippersnappers")
  4. This is bad

I generally agree that a lack of diversity is bad but, uh, what would society do instead?

  • Is the implied message that rich people should stick to their own neighborhoods and leave us alone? If so, isn't that actually, uh, even worse in terms of social stratification?
  • Is "anti-gentrification" really just a slightly less blunt way to say "classist"? Would people who protest against gentrification prefer that there just weren't rich (or, in the case of just about all the tech employees I know who get yelled at, slightly above the San Francisco median income) people and that all of that money was being redirected to existing city residents?
  • Is the primary request that the nouveau riche give back to their communities more? What would that entail, ideally? Is it more a question of civic engagement or of financial contribution?
  • Some sources seem to indicate that it's just a desire for more affordable housing development in existing space, but what does that mean? In a fixed-size city (particularly one like SF where it's not feasible to build upwards), housing is largely a zero-sum game. Do people just want larger cities? Because I've lived in LA county, and if you think that communities get better when they start to sprawl out, you're crazy-sauce.

I really don't know. I understand the anger that someone would have at no longer being able to afford their home, but I also understand that there are way, way more people and way, way more jobs than there were 20 years ago in the same 49 square miles of San Francisco, and I don't know what people think the right "fix" for that is. I can't really imagine protesting something without any idea of how to make it better, because that's just unproductive and incoherent, so I imagine there are plans.

I imagine this could be a hella-inflammatory post, but a lot of the time I read (and see) things that seem to be arguing that I literally do not have a right to live in the city, and that stings a bit. I figure that I have enough people who might see this link that I might be sent something interesting. Feel free to send me any (preferably coherent) links via comments on this post, Facebook, Twitter, ADN, or whatever. If I get good ones, I'll write a follow-up post with what I've learned.

[update 2013-07-26T11:25-0700]

A response that I've gotten a couple of times so far:

  • A big part of the argument against gentrification is about changing "character", not just about economic misfortune. That seems really subjective, especially since "character" isn't fixed.

A few of the things that I've heard as mitigations that I don't think are super-effective:

  • Rent control
    • Pros: Keeps people in their homes. Fairly easy to understand.
    • Cons: Makes it extremely hard for people to move. Provides a perverse incentive to landlords to evict people rather than working to find a mutually-equitable rent.
  • Affordable housing requirements (often Section 8) in new construction
    • Pros: ensures economically-diverse residents
    • Cons: only applies to new construction; often only helps very-low-income people, but doesn't specifically help with economic spectrum diversity, racial diversity, or other issues
  • Denser building
    • Pros: More units means more room for everyone
    • Cons: Skyscrapers hurt neighborhood cohesion at least as much as demographic changes. Architecturally and politically difficult in a lot of areas (although maybe it's all NIMBYism)?

An anecdote about rent control: as of the 2007-2011 Census ACS, the median gross rent in my zip code was $859±64. When I was looking for housing in 2010, the median asking price was much more than double that. Essentially, with rent control, rather than have everyone pay a market price of $900, some people pay $400 and some pay $2000. And I'm as much of a hypocrite as possible here, since equivalent units in my building now rent for more than $1000 per month over what I pay. I would be very interested in an economic study that tried to analyze how much rent control policies encourage higher average new-tenant rents as landlords try to keep up with rising mean per-square-foot costs.

I am still looking for articles without much success, although I did enjoy this Salon article... from 1999. It's nice to know that nothing ever changes...

BART strike remarks

This post is primarily a response to the article on the BART Strike from The Nation that seems to be making the rounds on Facebook, Twitter, and all of the other blagoblag echo chambers. I've adapted this post from a Facebook message conversation I had, so it might be a little strangely-phrased. I apologize for any inaccuracies, I do not speak for my employer, and all of that necessary prelude.

I found the Nation article on the BART strike this week frustrating and inaccurate and, because someone is wrong on the Internet, I had to write a response. The BART strike is one of the more visible bits of organized labor work in the last few years, and it makes me embarrassed, as a stereotypical liberal, that those defending it are doing such a bad job. If the union is striking for more money, then say that. But don't misrepresent statistics to justify it. And if the union is striking for other reasons, then it would be lovely as a Bay Area resident and news-reader to know exactly what those reasons are. This well-disseminated article is nothing more than one-sided, poorly-researched editorializing masquerading as news.

read more