Markings (Adam Nelson)

Safaricom vs. Equity Bank (2014-09-19)

There's been quite a bit of discussion lately about Equity Bank (the largest bank in Kenya, and a real innovator) becoming an MVNO (mobile virtual network operator).  

Their goal isn't totally clear, but it's obvious that mobile platforms and money are now irreversibly linked.  The technology they're using is a thin-SIM from Taisys.

Safaricom and the GSMA are against the technology and are claiming that it will introduce security holes into the mobile money system.  However, the thin-SIM is no more or less of a security issue than the phone itself (over which the Government of Kenya (GoK) and Safaricom have absolutely no control).  Fake thin-SIMs are more of a concern than fake phones, though, since they would be shared much more frequently and malicious versions would be far cheaper to make.

Nonetheless, I am wholeheartedly on the side of a light regulatory touch.  It will take years for any security flaws to become a systemic risk, and that's what GoK should be worried about.  Newer technology or simple barriers (e.g. a 100k Kshs maximum account balance) can blunt fraud, and if it remains under a few percent of overall transactions (as cash, credit cards, and M-Pesa already experience), then it's ok.  It might be forward-thinking of GoK to clarify via legislation or regulation that Equity Bank is liable for many fraud scenarios, but that's it.  I suspect that Equity will voluntarily absorb most fraud the same way credit card companies like Visa and Mastercard took on liability for online fraud in the early days of eCommerce in order to boost usage.  Both companies have done extremely well, absorbing the costs of fraud - and reaping the profits from a comfortable user base.

The best solution is for GoK to discuss the issues but not to pass restrictions that neuter innovation in the economy - which is one of Kenya's strong suits and hopefully won't be abandoned anytime soon.
Should the Nairobi County website be hosted on Kili? (2014-07-16)

[The following text is an excerpt from an email written to the Skunkworks mailing list, a list for Kenyan technologists.]

Nairobi County and any government agency should use whatever services allow them to get the most value for their money.  That is taxpayer money and it should be spent wisely.  I'm no fan of governments blindly buying local and Amazon has an extremely good product.  I'm actually really happy that they're on Amazon and not on GoDaddy or something low grade like that.  I'm also very happy that they're on Amazon and not one of the entrenched local vendors with terrible service and terrible value.  The choice of Amazon shows that the tech people who implemented the Nairobi County site have some chops.


However, there is no county or agency budget so thin that a difference of a few thousand shillings a year for better hosting isn't worth the extra money.  I can guarantee that the weekly airtime allotment for a high-level Nairobi County employee is more than the annual hosting bill for this website - no matter where it's hosted.  Developers must demand top tools for their work when it comes to paid projects.  I understand that if it's just a hobby one wants the cheapest possible solution - but for a professional site, hosting is going to cost somewhere between 2% and 5% of the annual budget.  For a one-off project, hosting is probably going to cost 100k Kshs (15k for the hosting provider and 85k for the people to do all the security updates, monitor the site, make sure updates are made to support new browsers, etc.).  And then of course, for really complex websites, you're talking about $1M/year in operations costs and on up from there.  Hosting is not where you want to be cheap, both because the entire cost is under 5% of the budget and because the right choice of hosting can have a significant impact on your metrics.

Of course, there are cheaper options.  For instance, our main homepage is totally free because it's hosted on Github.

Here's the site:  http://kili.io

But it's not 'free' really.  James and I at Kili keep that thing running - and we consume money.  Github does the hard work, but we make sure that the copy is up to date, that it renders well, etc.

There are three reasons to consider a local host.

First off, you might have trouble even being able to pay for Amazon.  What if you're a local SME that wants to pay by direct wire, or a developer without a credit card?  No problem: Kili accepts both wire transfers and M-Pesa.

Second, what about data?  At some point, Nairobi County is going to want to allow citizens to see information about tax payments on a plot of land.  Is it really smart to keep that data in Europe where about a thousand different entities can sue to access that data?  It's probably safer in Kenya where Nairobi County has a stronger ability to manage the data as they see fit.  When we're talking about health and financial data this becomes pretty serious.

Third, let's talk about latency.  Within Nairobi, it's not 50ms to reach the Kili host, it's 10ms.  To reach the Nation, it's 140ms.  People say that nobody notices milliseconds, but what about when a website has more than 100 HTTP requests, which block because browsers will only do so many at a time?  What about streaming - which really suffers?  The Nation takes 6.65 seconds for the DOM to be available and 7.26 seconds for the site to be fully loaded.  That's not including async stuff that happens afterwards.  There are over 200 requests by that time.  Hosting on Kili would drop the site load time to under 1 second and probably double the number of page views.  Page views are how media companies make money.
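For a rough sense of the arithmetic, here is a back-of-the-envelope sketch in Python of how round-trip time compounds when a browser fetches a couple hundred resources a handful at a time (the numbers and the parallelism figure are illustrative assumptions, not measurements):

def page_network_wait(request_count, rtt_seconds, parallelism=6):
    # Browsers fetch only ~6 resources per host at once, so requests
    # land in sequential batches; each batch costs one round trip.
    batches = -(-request_count // parallelism)  # ceiling division
    return batches * rtt_seconds

for rtt_ms in (10, 140):
    wait = page_network_wait(200, rtt_ms / 1000.0)
    print("RTT %3dms -> roughly %.1fs of network wait" % (rtt_ms, wait))

At 10ms the network wait is a rounding error; at 140ms it starts to dominate the load time.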

This third point should not be lost on the government side either.  It's obvious that local media businesses will make more money once they move locally, and the managers who make that happen will DEFINITELY be rewarded with promotions.  The government sites don't make money, so who cares?  That misses the point, because civil servants and consultants who build and maintain those sites can improve their careers immensely.  Don't you think that the person in charge of that site will be rewarded when he or she shows Kidero that page views went up 30/40/50/100%?  Those are voters, and that website is the platform by which elected politicians and their staff get their message out and advance in their own careers.  This stuff really matters and it's not just milliseconds - it's votes and agendas.

Anyway, this has been a long post and I just wanted to thank everybody for reading it.  Since I do know some of you are using VPSes on unpaid projects with minimal budgets, or are students, we're offering a very special deal.  Anybody who signs up (click 'Sign Up' on http://kili.io) and tops up with $15 (Credit Card) or 1,200 Kshs (Mpesa) will get a $100 credit.  All you have to do is send me an email to request the credit and ideally point out one thing you would like us to change about Kili.  This is as cheap as it gets because we know that when people see how much better it is to be local, they'll never go back.

Stable, Flexible, Extensible, Maintainable, Scalable (2014-02-21)

This is a pretty good article to check out (or just read my summary below):

http://enterpriseprogrammer.com/2013/06/15/sfems-stable-flexible-extensible-maintainable-scalable/

I don't agree with programming with a 20-year lifespan in mind, but people should definitely think in terms of code being around for 2-3 years and modified during that time by 4-5 future developers they have never met.  A minimal code sketch illustrating a couple of these ideas follows the list.
  • Stable - Unexpected inputs should lead to simple errors rather than cascading failures.
  • Flexible - Lower level code should be flexible enough to accept some changes in higher level code without modification.
  • Extensible - Interfaces should be written to be extended in the future, not with the presumption that there won't be any change in the future.
  • Maintainable - Code should be clear and precise so that future developers can change methods without fear of breaking existing functionality.  The corollary here is that automated regression tests should be part of the codebase.
  • Scalable - Logic should be able to scale.  It doesn't matter how fast an action is per se - it matters how the action scales with regard to the frequency with which it will run and how that impacts the rest of the system.
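Here is that minimal sketch (a hypothetical parse_age helper, not from any real codebase) of the 'stable' and 'maintainable' points: unexpected input raises one simple, descriptive error instead of cascading, and a regression test lives alongside the code:

def parse_age(raw):
    # Stable: garbage in produces one clear ValueError, not a cascade.
    try:
        age = int(raw)
    except (TypeError, ValueError):
        raise ValueError("age must be an integer, got %r" % (raw,))
    if not 0 <= age <= 150:
        raise ValueError("age out of range: %d" % age)
    return age

def test_parse_age_rejects_garbage():
    # Maintainable: the regression test ships with the codebase.
    try:
        parse_age("not-a-number")
    except ValueError:
        pass
    else:
        assert False, "expected ValueError"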
Laptop production in Kenya (2014-02-19)

There's been an ongoing discussion over the past year about getting one laptop to every child in Kenya and how the ICT industry here thinks about that.  Many people feel that, to the degree that this is a good idea at all, laptops should be assembled in Kenya.  I don't agree and below is a summary of the thoughts I posted to that group.

Supporting industry and helping kids with a final product are two independent things.  The more money that goes into spinning up manufacturing capacity, the less money that goes into getting the technology to the kids.  Kenya can't magically produce laptops cheaper than China can.


Kenya has no chance of having a meaningful laptop assembly capacity because it doesn't have the economies of scale that East Asia has.  Europe and the US are giving lots of technology to their children and none of that stuff is produced in-country because manufacturing plants can't exist in isolation.  

A laptop assembly plant is just one of dozens of plants (chemical manufacturing, plastic-shaping, aluminum foundries, LED, etc.) needed in close proximity to each other just to create the first laptop.  Having a laptop assembly plant in Kenya while all the preceding plants stay in China isn't economically viable.  And if the plant is only creating a few million laptops, it's doubly not viable.  It has to produce more like 10M/year, and to do that, the plants would need to export those laptops.  Where are these laptops going to be exported to, and how?  Is a typical Rwandan going to buy a Kenyan laptop over a Chinese one?  Maybe, just maybe, with a solid $5-$10B of pure investment Kenya could get a real industry going - but to what end?  Computer manufacturing has already plateaued (currently one computer produced for every 20 people each year), and it's agreed that future growth will happen in tablets and mobiles, where most of the value is in commodities and intellectual property, not assembly-line labor.  Tablet sales are already 60% of computer sales, and the industry is seeing 50% YoY growth.

Kenya has all the raw ingredients to leapfrog manufacturing and go straight to a knowledge economy - it just needs to invest deeply in its children through strong, universal education.  Having young people working on assembly lines is not a way to empower youth.

Editor's Note: A previous version of this post said 'South East Asia'.  I meant to say 'East Asia'.
Thoughts on the new draft rules for dot KE published by the Communications Commission of Kenya (CCK) (2014-02-17)

As a consequence of the Kenya Communications Act, CCK has drafted a new regime for administering the apex .ke country-code top-level domain (ccTLD).  It is still not clear from any of the documents whether CCK has an opinion on arbitrary second-level domain names under .ke, but in addition to a new license for the apex registry (.ke itself) and second-level domain registry (.co.ke), there are also licensing documents for registrars who will be allowed to lease domains to the general public (example.co.ke and possibly example.ke).

For those who are unaware, registries take on core responsibilities around rules and regulations regarding domains and registries need to be held to very high standards.  Registrars are merely sales organizations that resell (or really, lease) domains from the registry.  Typically, a registry does not sell to the general public and a registrar has no core maintenance or administration role with regards to the domains it sells.

As background, generic top-level domains (gTLDs) include .com, .org, .mil, .net, etc.  In addition to the generic domains, each country has a TLD referred to as a country-code TLD (ccTLD): .uk, .ke, .de.  There are also now internationalized domains (IDNs) such as .中国 and .香港.

Thank you to CCK

First off, we should all be thankful to CCK for having a public comment period at all.  It's very atypical for Kenyan government departments to give time for proper feedback.  Sometimes one sees gazetted rules in the newspapers dated a month back so that they go into effect immediately.  CCK really deserves a round of applause for recognizing that releasing rules in that way is less than ideal.

What's out there?

Visiting Gandi (a registrar, not a registry) to see what TLDs are broadly available is a great way to understand the current topology of domains globally - and which ones are well represented around the world.  Unfortunately, CCK rules continue to bar companies like Gandi from reselling the .ke TLD, and therefore the number of registrations is very low.

Let's analyze some other ccTLDs

Norfolk Island

Norfolk Island (a very small semi-autonomous territory east of Australia) has .nf and allows those domain names to be registered for $1,420 and renewed for $230/year thereafter.  Large multinationals like Google seem to pay up for "google" under any TLD so this is probably a good strategy for a very small country to capitalize on its ccTLD when there is no chance of broad adoption.

Barbados

Barbados maintains a semi-restricted ccTLD, .bb, which has very little uptake - seemingly intentionally.  The rationale appears to be that Barbados wants to assert strong authority over the domain so that end-users know that hosts have met certain criteria in order to be able to use the .bb ccTLD at all.  The goal in this scenario is not mass adoption of .bb domains.

Austria

Austria has a long history with its TLD (.at).  It is currently managed as a public-private partnership that was spun out of a loose coalition of Austrian ISPs.  They have a permissive registrar license not requiring registrars to be located in-country and therefore have many sellers of the .at domain name around the world and many registrants.

Thoughts on the CCK Proposal

It is out of the scope of this article to discuss all of what CCK is trying to change with these documents but I'd like to discuss some obvious points:

Arbitrary Second level domains

Although some ccTLDs (.uk) do not allow second-level domains (like bbc.uk), many have moved to allowing second-level domains (like donteat.at).  Second level domains can sell well (ca.ke might work or would it be a flu.ke?) and earn money for the top-level registry.  They also overcome the often meaningless divisions between companies (.co.ke), organizations (.or.ke) and 'others'.  CCK has not made it clear what the future is for second level domains under .ke, but this needs to be a discussion and I would strongly favor second-level domains for .ke in order to increase adoption.  Note that allowing domains like 'example.ke' does not mean that 'example.co.ke' or 'example.go.ke' couldn't continue to exist.

Foreign ownership of registrars

Registrars act as an intermediary between the registry (the organization that administers the definitive database of all .ke domains, handles complaints, enforces regulations, etc...) and the domain holder.  Typically, an end-user leases a domain name from a registrar on a renewable annual basis and the registry administers the database.  Unlike many other countries, CCK will require that all registrars be Kenyan-owned. 

Although it's totally reasonable that the dot KE registry be explicitly under Kenyan authority, I don't think it's in the best interest of the industry to force registrars to also be locally domiciled.  While it may be good for a narrow group of local registrars protecting their domain-registration business, it is bad for boosting the overall number of .ke domains registered.  The vast majority of domain registrars globally are non-Kenyan, and excluding them from being able to lease the domains means that .ke will continue to be a niche asset.  Kenya uses foreign contractors for many specialized activities (building airports, laying roads, improving container ports) and there is no reason to treat potential .ke registrars any differently - especially when the cost is a continuing low uptake of the .ke TLD.

Officious requirements

In addition to the exclusion of most of the global registrar industry from participating, the "APPLICATION FOR DOT KE DOMAIN NAME REGISTRY SERVICES PROVIDER AND DOT KE SUBDOMAIN NAME REGISTRAR SERVICES PROVIDERS UNDER THE UNIFIED LICENSING FRAMEWORK" document places myriad burdens on even local registrars including the need to:

  • acquire sworn affidavits
  • pay by check without accepting credit card or mPesa
  • submit paperwork in-person rather than via mail (or better, via electronic means)

Note: It's been pointed out to me that my determination might be wrong.  This document may only refer to registries (although it seems very clear that it's referring to registrars).  I really can't say for sure and we would all be grateful if the CCK would clarify what it means here.

Intrusive requirements

Although the officious application requirements are very bad for the uptake of the .ke TLD, many of the requirements are also intrusive.  It makes sense for the main registry provider of .ke to be rigorously vetted and part of a public bidding process, but it is counterproductive to place the same high bar in front of the hundreds of registrars who should instead be welcomed into the program in order to sell more .ke domains.  Requirements that should be stricken are:

  • The need to supply a 3 year business plan
  • The need to supply Articles of Association (or equivalent)
  • Notarized share certificates (this particular clause is also unclear - what exactly does CCK even want?)

Again, the documents are confusing so it's not really clear what CCK intends.

Summary

The research by CCK vis-à-vis the regulation and administration of ccTLDs globally, as evidenced in these documents, appears pretty minimal.  Most of the background appears to come from other Kenyan law rather than from other Internet governance regulation.  I would hope that those who drafted the text read at least five or six sets of comparable documents from other countries - of which there are over 100.  This can easily be addressed: it is well worth the CCK researchers' time to not only read those regulations but also reference them in an executive summary attached to these drafts.

Security needs to be in layers. (2014-01-27)

I recently responded to a query about security in the cloud and whether certain security-conscious apps should be deployed on an IaaS layer in East Africa.  Here is my response:

If an organization can afford and currently implements strong physical security, low-level network security (intrusion detection, stateful layer 3 firewall, etc...) and kernel-level OS security, and none of those functions come at the expense of high-level (OS and application) security, then you may well be better off with brick & mortar.

However, I doubt that any local group aside from national security bodies in Kenya and Rwanda has that capacity, and the rest of the organizations will get better security from using a cloud-based infrastructure solution.

There is no way that low-level security is better at any bank or similar institution in East Africa than it is at Amazon Web Services.  

And as for high-level security (i.e. OS and application exploits), cloud providers do not purport to cover those things - it's up to the end-user to secure that level.  But an organization with only, say, 2 or 3 security people can get better overall security by leveraging a cloud infrastructure provider for protection against physical and low-level attack vectors, and focusing its own efforts on higher-level attacks like operating system exploits and, especially, holes in custom-written applications.
Kenyanization and its effect on startups (2014-01-14)

I was recently reading this form for a work permit for non-Kenyans:

http://www.immigration.go.ke/images/downloads/form22.pdf

I think this single form sums up why Kenyan companies thus far haven't been able to become pan-African powerhouses, let alone global ones.

Aside from the glaring omission of anybody who is non-European, non-African, and non-Asian (i.e. everybody from North and South America), I noticed that the underlying thrust of the document is to make sure all companies located in Kenya are geared towards becoming more Kenyan (except, of course, all the international non-governmental and diplomatic organizations, which bypass this whole process).

It seems like GoK (or most of it anyway) doesn't understand that every country has to choose between indigenization of its domestic industrial sector and allowing its industrial base to compete at a global level.  Indigenization can work in countries like Saudi Arabia, where the focus is purely on resource extraction and not on building global companies - but it doesn't seem like Kenya is on that path.  Kenya seems to me to be a place of commerce that can really take advantage of trade with other countries, much like Singapore, Hong Kong, Thailand, etc. have done in South East Asia.

You'll notice that the largest Kenyan bank, KCB, is only the 59th largest bank in Africa.  Who knows how small KCB is when compared to the global field.

http://www.theafricareport.com/Top-200-Banks/top-200-banks-2013.html

There is no natural reason for Kenyan banks to be so low on the list.  Kenya is one of the major economies on the continent, it has an educated work force, it has a large domestic market with which to nurture companies - yet there is no way for a Kenyan company to scale because of the insularity of the immigration regime.

Is there any impetus within the government to address this problem?  I'm concerned about what my plan B is as a startup trying to be pan-African with a headquarters in Nairobi.  When I first moved to Nairobi, I thought this was a global city like New York, where I had worked with Indians, Swedes, Burmese, Brazilians, and Americans.

Here I'm friends with people from all over the world but they all work for the UN and their employers bypass the GoK which otherwise fights so hard to keep foreigners from working and building businesses here.

Is there a solution or is it just going to get worse?  People I've spoken to say 'come to Rwanda' or 'come to Mauritius' or even 'come to Ghana' .... can a company have a headquarters in Kenya and run a tech company with pan-African and global ambitions?
I can tell you a bit about Kili by telling you a bit about myself. (2013-10-14)

Today somebody asked me to tell them more about Kili, the public cloud we're trying to build in Kenya.  I said that I can tell them a bit about Kili by telling them a bit about myself.

I moved to Nairobi in January, a few months after leaving my role as CTO of Yipit, a NY-based startup.  Over 2 years, we had grown the company from its two founders to 25 people.  By the middle of last year, the co-founders and I no longer agreed on our future together, so we parted ways.

At about the same time, my wife, who works for Columbia University, had been coming to East Africa for a number of years as the manager of regional projects focused on climate and public health and was able to transition from being NY-based to Nairobi-based.  I had been to Nairobi twice before with her and had talked to startup people on those visits and so we both decided that the time had come to make the move.

After arriving in January, I spent my days speaking to many of the startups at iHub (the local startup spot) and elsewhere and talked through the different ideas that I might be interested in exploring more deeply.  For instance, taxis are notoriously poorly organized in Nairobi and I thought about how to fix that for a while - and concluded that even if solutions were found, they would be hard to scale to other cities and anyway, the market simply isn't that big.  Taxi drivers don't get paid that much and downtime costs for parked cars are pretty low.  The most expensive part of a taxi ride is probably the wear and tear on the car as it drives, and the gas.  Solving this problem, while good for some people in the city, simply won't generate large quantities of money.

Something similar is true with e-commerce.  There is nothing like Amazon here and it's a real frustration for users who are used to the service (aside from State Department employees who get their stuff freight-forwarded from the States courtesy of Uncle Sam).  However, executing the right solution will be very difficult to do and even then, it's not clear how to scale beyond Nairobi.  One reason that Amazon works well is that it has 10x the number of products available as Walmart does - so aside from simplicity, users have more choice.  In Nairobi, managing a warehouse with 10x the goods as Nakumatt has (the local Walmart) would be nearly impossible because that company would be the only one carrying all of those products and there simply isn't a robust enough supply chain to support that product-depth.  Security is a problem too and then there's the issue that there is no routinized mail or shipping service to residential addresses.

In addition to potential pitfalls with these models, there's the overarching problem that I'm not a typical African consumer. So, while I know a bunch about product development and technology and startups in general, it's not clear that as an expat American, I am best placed to deploy a consumer-focused startup in Africa at all.

What became amazingly clear from talking to all of these different startups, however, is that each of them was desperate for high-quality cloud infrastructure.  The closest AWS or Digital Ocean presence is in Europe - thousands of miles and about 150ms away by fiber.  For the most part, European and American companies don't accept local payment methods.  And finally, some of the groups had regulatory concerns about their financial and health data being out-of-country.  The need for a local provider of these services was high.

While not the perfect person to build a local consumer app, I am in the perfect position to supply those companies with modern public cloud infrastructure.  A number of years before Yipit, I had been the CTO of a large website in NY during the Web 1.0 era, Forsalebyowner.com.  In those days, we would get servers by FedEx, set them up, drive them to the colo facility, and spin them up.  At Yipit, I architected a large public cloud installation.  I realized that I was one of the few people in the region with the right background to launch a public cloud - and the market was clearly in need of one.

That's where Kili comes in.

Kili is Amazon Web Services for African markets.  


The Retrospective Meeting (2013-09-17)

Three Types of Meetings

Agile development requires a couple of different meetings in order to get things done.  Most people know about the daily standup meeting where all the members of the team stand and one at a time say what they did the day before and what they plan to do that day.  Ideally you pass around a token (conch shell anyone?) and only the person holding the token can speak.

The other important meeting is called 'estimation'.  This is held every sprint (week or two) and it's when cards are estimated by the people who will likely do them.  Only those people can estimate (usually with point values of 1, 2, 3 and maybe 5).

Both of these meetings are absolutely critical to a team's success in the agile scenario.

There is a third meeting that some groups forget about but which is no less important.  In fact, I would say it's the most important meeting and it can even be used without the rest of the process on a non-development project (for instance with a sales team).

What is the Retrospective Meeting?


First of all, get everyone in the room or simply commandeer the entire office.  This is a group event and nobody on the team should be missing it (unless they're busy playing Starcraft of course).  Don't be lame and exclude certain staff.  If you don't value their input enough to have them in the meeting, let them go entirely.  If there's a senior person who doesn't even want to come to the first meeting - know in your heart that that person doesn't value you and start looking for another organization to work at or start thinking long term about ejecting that senior figure from the group.  Hiring and firing is an important part of all organizations - face it head on.

There is an exception to the 'invite everybody' rule and that's if an invitee might make other people feel 'unsafe' and otherwise keep members of the group from speaking up.  This might happen if you're a consultant called in to fix something.  If you're consulting, they've called you in to fix the problem.  Don't invite the toxic character but do make sure he or she knows the outcome of the retrospective meeting and work with them to fix the problem.  Hopefully you can fix the situation and earn your keep - or you can at least identify the problems for upper management who can then make tough decisions. If this is your team, this situation is really bad.  

Anyway, once everybody is together, let them know this will take 1 hour (1:15 if it's 10+ people and it's the first time).  Don't let meetings go more than 5 minutes over time - you really don't deserve the respect of that senior person who skipped the meeting if you can't even end it on time.

Cellphones and computers off.

Supplies:

* Post-it notes or index cards with tack/putty/magnets to stick them on a whiteboard.

* Bunch of markers to write on the cards.

Let's get started

Make sure to start things off by asking about safety and make sure that everyone is comfortable enough to speak.  This is the organizer's responsibility. Everybody gets a stack of cards or block of post-it notes.

What Worked? What Didn't Work?

On each card, participants write about the previous 2-3 weeks (every 2-3 weeks is a good cycle for these meetings) under "things that worked" or "things that didn't work".  Let them brainstorm for a solid 10 minutes or until there's not much else to write.  When people have written all their cards, the cards all get put on the board: the left side is reserved for "things that worked" and the right side for "things that didn't work".

Some people add a 'puzzle' section on the right side of the board for things that people didn't understand (i.e. what's that giant box that got delivered last week and is sitting in the corner of the room shaking every couple of minutes?).

Group the Cards

The organizer then reads the cards out loud one by one.  If one is similar to another that has already been read, the organizer puts them together in a cluster on the board.  If the organizer needs clarification, they ask whoever wrote the card to offer more details to the group.

When all the cards are grouped on the board, each person gets a chance to vote on the cards they think are important by adding a dot.  Each person gets a total of 3 votes (feel free to put 3 votes on one card).

It's nice to vote for "things that worked" but over time, people are going to focus on "things that didn't work" because those are the things people want to make better.

Counting

Now the organizer takes the top 3-5 cards and moves them to the upper right side of the board in order.  Only the top 3 are typically discussed but sometimes there's time to talk about #4 or even #5 so it's best to just move them to the upper right as well if the vote count is close.  

Brief Discussion

Starting with the top, you open the floor to discuss each item one by one.  Give about 5-7 minutes to explain what didn't work and why it matters.  At the end of the discussion (shut it down if it's taking too long), ask for a volunteer to be the point person to address the problem ahead of the next retrospective meeting.  The volunteer DOES NOT HAVE TO FIX whatever didn't work.  The volunteer just has to move forward with putting together the solution to the issue: getting the right people together, finding out how to fix it, assigning people to fix it, or of course, just fixing it.

Next Session

Each session gets a bit easier and smoother, and this is when the point people assigned to unwind (or at least start to unwind) the things that didn't work from the previous session get to talk about their progress.  If progress isn't sufficient, the person keeps at it (or maybe somebody more senior takes over) until there's some sort of resolution.  Not everything always gets fixed, but the team sees how they were able to push problems forward and get them addressed - and that's an amazing step forward for most organizations.

Conclusion

Just do it!  If, after you read this article, you're worried about buy-in and getting people on board, let the powers that be know that this is a great way to build a better organization and that the staff will really respond positively.  And as with all things in the agile process, don't think too hard about it - just do it.  You can always say "that was stupid" and never do one again.  If a group can't try something just once to see if it works, it won't grow and be able to innovate.  The capacity to experiment and innovate is the sine qua non.

Pay to Play VC events (2010-11-28)

I just got another one of these VC email solicitations for an 'exclusive event'.  At least youngStartup has the transparency to tell me upfront what the presentation fee is, but I find it disturbing that for $1,500 I can buy a "Top Innovator" award.  Varud.com has only ever been a blog.  About 2 years ago I worked on an idea for a social network based around future events and their outcomes - but it never materialized.  It certainly doesn't merit an award.  I'm surprised that people from Bain, etc.. could get suckered into attending an event like this.  Obviously, the best ideas won't be presenting here, only the ones with money to play.

Don't get me wrong, established companies pay to present all the time.  That's part of how they market.  However, a startup spending $1,500 for something like this either doesn't know what it's doing (much better to spend that money on your product or more strategic marketing) or isn't well run - or it's already well-funded.  Jason Calacanis has already written about this extensively.  His message is that there are plenty of free ways to get an audience for your ideas.  The most important thing is to have a partner whose skills complement your own and the time and money to work on your project.  If you don't have the partner, worry about that first.  If you have the partner but not the time and money, make your pitch awesome and then start pitching to whoever will listen - for free.  Just getting in front of user groups and meetups is a huge step.  At the same time, work on your product.  If, after 6 months, you're not getting any traction, step back and reevaluate everything from the ground up.

Note: In order to not make this an ad hominem attack, I've taken out the sender's name.

Hey Adam - Let me know if you'd like to have Varud. recognized as one of 60 Top Innovators presenting at The 2010 New England Venture Summit being held on December 14-15 at the Hilton in Boston Dedham.

As you may know, this exclusive venture capital summit will bring together over 450 VCs, Corporate VCs, private investors, investment bankers and CEOs of emerging companies and will feature high-level networking, face 2 face meetings and over 40 VCs on interactive panels.

Partial list of VCs confirmed to speak includes:

Neeraj Agrawal, General Partner,  Battery Ventures | Omar Amirana, MD, Partner, Oxford Biosciences | Christian Bailey, General Partner, incTANK Ventures | Michael Balmuth, General Partner, Edison Ventures | Dr. Jonathan Behr,  Principal, PureTech Ventures | Tom Cain, Special General Partner, Sail Venture Partners | Carlos Cesta, Director, Verizon VC | Jon Chait, Partner, Dace Ventures | Andrew D. Clapp, Managing Partner, Arctaris Capital | Issam Dairanieh, US Director, BP Alternative Energy Ventures | Ohad Finkelstein, Partner, Venrock | Patrick J. Fortune, Ph. D., Partner, Boston Millennia Partners | Liron Gitig, Principal, FTV Capital | Jeffrey Glass, Managing Director, Bain Capital Ventures | Greg Kats, Managing Director, Good Energies | Alan J. Koenning, Fund Manager, UPS Strategic Enterprise Fund | Venetia Kontogouris, Managing Director, Trident Capital | John Lawrence, Partner &  CFO, Longworth Venture Partners | David J. Martirano, Co-Founder and General Partner, Point Judith Capital | Chuck McDermott, General Partner, Rockport Capital Partners | Robert McNeil, Ph.D., Managing Director, Sanderling Ventures | Jeffrey B. Moore, Vice President, MP Healthcare Venture Management | Ira Nydick, Senior Technology Analyst, Panasonic Venture Group | Brendan O’Leary, General Partner, Prism VentureWorks | Patrick O'Neill, P.E., Investment Associate, Connecticut Innovations | Joseph Riley, Managing Member, Psilos Group Managers | Praveen Sahay, Founder & Managing Director, WAVE Equity Partners | Gavin B. Samuels, M.D., MBA, Senior Partnering Director, Teva Innovative Ventures | Bart Stuck, Managing Director, Signal Lake | Steven St. Peter, Managing Director, MPM Capital | Scott Requadt, Transactional Partner, Clarus Ventures | Chris Risley, Operating Partner, Bessemer Venture Partners | Jake Tarr, Managing Director, Kinetic Ventures | Louis A. Toth, Senior Managing Director, Comcast Interactive Capital | Markus Thill, Managing Director, Robert Bosch Venture Capital | Tracy S. Warren, General Partner, Battelle Ventures | Tom Whiteaker, AVP, Hartford Ventures | Caleb Winder, Vice President, Excel Venture Management | Bilal Zuberi, Principal, General Catalyst Partners and many more.

Our screening committee is busy reviewing the submissions and will be selecting the remaining companies by this coming Monday.  If interested, I would need you to fill out the summary outline and submit by the end of Monday.

I've included the details of the opportunity - let me know if you'd like me to send over the summary outline.

Featured Company Benefits include:

    * Recognition as a Top Innovator 60 company

    * Access to leading VCs, Corporate VCs, private investors and investment bankers

    * Presentation slot

    * Three Complimentary passes for company executives

    * Additional discounted registrations

    * Two page Company Profile published in event guide distributed to all attendees and investors

    * Media Exposure

    * Two complimentary passes to attend “Featured Company” Coaching Session with active VCs providing feedback

    * Two passes to opening reception


Fee to present: $1,485 (there is no fee to apply).


Regards,

Adam


XXXXX

Senior Associate

youngStartup Ventures

Phone: XXXXX

Email: XXXXX

URL: www.youngstartup.com

Custom Fields on South 0.7 (2010-03-30)

If you've upgraded to South 0.7, you'll notice that custom model fields are no longer supported.

There's a long, convoluted discussion about supporting custom fields with introspection rules, but that's unnecessary for most custom fields.  If you're just extending a standard field like CharField, follow their tutorial example.

In your fields.py file (which should hold the custom field - MyCustomField, in my case in the util app), import: from south.modelsinspector import add_introspection_rules

Then, at the bottom, under your field definition, put: add_introspection_rules([], ["^util.fields.MyCustomField"])
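Putting the pieces together, a minimal fields.py might look like this (MyCustomField and the util app are the example names from above, so treat this as a sketch rather than drop-in code):

from django.db import models
from south.modelsinspector import add_introspection_rules

class MyCustomField(models.CharField):
    # An example CharField subclass with no new database-level behavior.
    pass

add_introspection_rules([], ["^util.fields.MyCustomField"])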
AckMate replaces Ack in Project (2010-03-30)
It looks like I'm behind the times. AckMate has replaced Ack in Project - time to upgrade...
ackrc file (2010-03-30)

Great tip: add this to your ~/.ackrc file for ack:

--ignore-dir=migrations 

Then, when you run Ack in Project from TextMate, you won't get hits on your migration directories - which typically aren't what you're looking for anyway.
Pros and Cons of MongoDB (2010-01-22)

I was recently asked by somebody to answer some questions regarding MongoDB.  Unfortunately, I have yet to use it in production, but Ara, Zach and I have put it through quite a few paces at this point.

Nature of Use: Would you mention the nature of the application (for example, reporting or analytics?) you are using MongoDB for?
We use MongoDB for high volume logging.  After what we need is logged, we use Python/PyMongo to transform the data into chunks suitable for Postgres.  Postgres is our central data store used for our Django application and all its associated models.
What other NoSQL storage solutions were evaluated, and why was MongoDB chosen over the others?
Cassandra was the other one that we got pretty far with.  In terms of maturity and scalability, Cassandra appeared to be the winner.  However, Cassandra has extremely limited query capabilities that weren't sufficient for us.  In addition, MongoDB has plans to focus on scalability which suited our needs fine.
Robustness: How long have you been running MongoDB in production?
Have not run it in production yet.
Did you encounter any issues on the stability front (any crashes or restarts needed)?
One issue is how best to keep it 'living' without human intervention.  So far, the tools have been very straightforward and simpler than solutions for other products.  However, we haven't tested the quality of backups under high load nor have we really pressured the system in the wild.  We architected MongoDB in our system so that we could lose it and all we would lose is incoming data while it was down, not historical data or reporting capabilities (which is ok for us for a few hours).
Performance: What has your experience been on the performance side (queries/sec for the hardware configuration being used)?
We hit 30 inserts per second on a high-CPU (the lowest 64-bit) Amazon EC2 instance.  However, the bottleneck was in our Python listener, so we don't know how much higher MongoDB could go.  We suspect quite a lot, as the load average was under .2 during this test.
Did performance degrade as the data size grew?
We haven't sufficiently tested this yet.
Scalability: What is the rough data size (number of records, number of collections, size on disk) Mongo is being used for?
The goal is to hit 1k inserts/second with real time processing (i.e. using their upsert functionality which is something like INSERT ... ELSE UPDATE) and to hold onto 10M+ records in a collection.  If we weren't confident in that being possible, we would not have chosen MongoDB.
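For the curious, a minimal PyMongo upsert looks something like this (the database, collection, and field names here are invented for illustration):

from pymongo import Connection

db = Connection("localhost", 27017)["logs"]
db.events.update(
    {"key": "page_view:/home"},   # match criteria
    {"$inc": {"count": 1}},       # increment the counter if a doc matches
    upsert=True,                  # otherwise insert a new document
)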
Does all the data sit on one MongoDB server, or are you using MongoDB in a clustered environment?  If it's a sharded environment, we would like to know your experience, since MongoDB does not support auto-sharding out of the box.
We are using sharding, but again, we have not pushed it to the limit.  Although it does not support auto-sharding, manually setting up a shard is pretty straightforward.  This is one of the advantages Cassandra has.
DataReplication/Persistence: Did you use data-replication in Mongo? What has been the general experience with it?
We are planning to use replication but are not yet doing so.  As referenced above, we have the option of losing MongoDB for a few hours without incurring a major business penalty.
Regarding persistence of data, did you encounter any issues given that MongoDB does lazy writes to the file system?
No, but again, it has not been pushed enough for me to feel confident that this is a non-issue.  We are planning to use XFS, however, which does have journaling to account for problems at the file block level.
Search: Did your application require text searches on the documents stored in Mongo?  Since MongoDB does not support text search out of the box, how did you take care of search?
We aren't using full-text search.  Our plan is to set up Sphinx or something similar when we need that.  It seems like the right architectural solution.
Support: Regarding resolving issues related to Mongo, did you rely on the open-source community or sign up for paid support?  What has your experience been?
Community.
Client-side tools: Which libraries do you use to talk to the MongoDB server?  Our web app will be running in Python, and there are two libraries available for Python.
PyMongo.
It would be great if you could share pointers to the client-side tools you are using with MongoDB.
The Mongo shell interface is a bit clunky (the way it uses JSON for everything), so often I just use PyMongo, since all of our real code uses that anyway.  Our plan is to have only a small number of collections, so any necessary queries would happen through our code, not in an ad hoc way requiring a client GUI or something like that.
Django 1.2 Alpha - Template Threading (2010-01-13)

Django 1.2 Alpha 1 was recently released to developers worldwide.  I haven't been able to play around with it yet, but I am reading through the announced changes and plan to write a series of articles for people making the leap from 1.1 to 1.2 - since that's what I'll be doing this spring.

First of all, this is a giant release.  I don't expect it to go smoothly and I can pretty much guarantee that some major third-party packages will be broken even when Django 1.2 is released as stable.  One of the major changes is that template node bytecode will now be cached in memory (I think - at least that's how I understand it).  Most people will say, 'cached is faster than not cached ... this is great'. 

Unfortunately, what this really means is that the web server will cache the code and run it across all the threads that share that process memory pool.  Now, imagine you are on a default Apache installation on a modern OS.  These days, that Apache will be running in a multithreaded configuration.  That means that each thread (end user) will hit that bytecode in a shared fashion.  If you (or a third party developer) have written any custom template tags, this can be a problem. 

Thankfully, the fantastic Django docs point this out and explain why this matters.  For the lazy, I'll reproduce version 1.1 compatible template tag code that could drive the cycle tag:
{% cycle 'row1' 'row2' %}
And the Python code behind it:
import itertools
from django.template import Node

class CycleNode(Node):
    def __init__(self, cyclevars):
        # Created once at template compile time, so shared across threads:
        self.cycle_iter = itertools.cycle(cyclevars)
    def render(self, context):
        return self.cycle_iter.next()
To take their example: if you write a tag that cycles different styles for list items and two threads hit that tag node, you can get cycling that crosses thread boundaries - with two clients being served concurrently, one client might get two odd styles and the other two even styles.  In Django 1.1, since the template tag node was not cached, each user would get the expected alternating odd/even cycle of styles.

What does this mean for people with custom tags?  In plain English, it's only really an issue if state can't be global.  Keep in mind that the variables passed in are still thread-safe (they are stored with the thread, not in the template node code).  If your template tag keeps state that depends on the context of the template at a given moment, then you need to worry and follow the docs' advice of using render_context; if not, you're ok.
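For reference, here is a sketch of the same node reworked along the lines the 1.2 docs suggest, with the per-render state moved into render_context (adapted from the documentation's pattern, so treat the details as approximate):

import itertools
from django.template import Node

class CycleNode(Node):
    def __init__(self, cyclevars):
        # Store only the immutable inputs on the shared node.
        self.cyclevars = cyclevars
    def render(self, context):
        # render_context is per-render, so each response gets its own
        # iterator instead of sharing one across threads.
        if self not in context.render_context:
            context.render_context[self] = itertools.cycle(self.cyclevars)
        return context.render_context[self].next()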

Keep posted for a discussion on the Messaging API next.
Cassandra learning (2009-11-20)

I've been reading up on NoSQL databases for our new deployment.  We're down to two: Cassandra and MongoDB.  I wish I'd been more thorough about the decision to get it down to those two, but suffice it to say that we only eliminated the others (Voldemort, CouchDB, etc.) if they didn't support sharding, didn't have Python libraries, weren't 'mature', or for a few other specific reasons.  We didn't just dismiss solutions out of hand.

I'm just focused on Cassandra right now because my colleague, Ara, is focused on MongoDB.  We will be jousting later on about which software is best.

Pros:
  • Shards can handle datasets larger than the memory available (unlike Redis which can't handle more data than it has RAM).  This is a pro only in our case where we're expecting many GB of data.
  • Favors availability and partition tolerance over consistency - although it is eventually consistent.
  • Fully supports replication, partitioning, self-repair, etc... without application-level logic.
  • Supports asynchronous writes, where the node accepts the write and returns control to the client while the node takes care of forwarding the write appropriately.  The write is logged locally for fault tolerance.
  • Data is split locally between the Memtable (in RAM) and SSTables (on disk) for low latency and low volatility.
  • A Bloom filter allows very fast checking of whether a key exists without having to touch the data file.
  • Supported Python client library maintained by Digg.
  • Write is non-blocking - no read required.
  • Writes are atomic within a ColumnFamily.
  • 'Remove' functionality uses 'Tombstones' to mark a record as ready for deletion so that deletes are asynchronous.
Cons:
  • 'Schema' changes require restarting the service.
  • No commercial support.
  • Writes are favored over reads (which is good for typical scenarios, but worth considering for some people).
  • Loss of Libido
Speeding up a rarely used Leopard laptop (2009-10-21)

I have an old G4 laptop that comes out sometimes when my fiancée is using the shiny new MacBook Air.

Unfortunately, every time I use it, the hard drive thrashes and the CPU goes to 90%.  This is not good when you're simply trying to read nytimes.com.

What you'll probably notice if you look at Activity Monitor or the output of ps from the terminal is the locate command running.  It is trying to update the locate database - which makes running 'locate' from the command line faster and which, I presume, is the backend for Spotlight.

Just pop into the terminal and run:

sudo mv /etc/periodic/weekly/310.locate /etc/periodic/monthly/

(or equivalently, cd into /etc/periodic first and run: sudo mv weekly/310.locate monthly/)

Now the locate database will only be rebuilt monthly, so it won't thrash the disk when you open the laptop after a long gap.  If you don't care about locate, you can also just delete the file entirely - on a browser-only machine with no new files, it doesn't matter anyway.
Inglourious Basterds (2009-09-05)

To even conceive of this movie strikes one as deeply contingent: a revisionist historical plot about an anti-Nazi band of assassins bent on revenge for the atrocities committed during WWII punctuated by low comedy and grand action.

Quentin Tarantino does a masterful job of creating an homage to 30s film, a movie divided into vignettes painting characters in broad strokes, men - good and bad, a classic setting of occupied France, and a grand finale so full of deep satisfaction for the viewer it's hard not to grin with a thorough sense of  exaltation at the meting out of deeply deserved retribution.

The characters are: the Apache/Appalachian mongrel ready to lead his Jewish charges into battle (Brad Pitt), the stunning Film Noir heroine whose family was murdered by the Gestapo, a Negro projectionist reminiscent of Sidney Poitier, two Jewish machine gunners/head bashers (among other heroes), the evil Nazi 'Hunter', a German turncoat actress, and the Nazi high command. [Some might find these descriptions offensive but I'm trying to stay true to the dialogue]

It's an affecting movie that allows the viewer to feel a sense of schadenfreude.  The opening introduces us early on to the very real, very straightforward atrocities committed by the Nazis on a regular basis.  All of the action takes place in France - allowing one to feel a sense of normalcy unavailable to other theaters of the war closed to thorough mental examination.  The Concentration Camps, Normandy, Dresden - these are zones of European slaughter on such a scale that it renders the feeling person's senses numb.

At this point, we are introduced to a team of Americans sent behind enemy lines to terrorize the Nazi troops.  The critical mark of their attacks is scalping the dead to instill fear in the other units.  Early on we are desensitized to the murder - scalps are taken, swastikas are carved into foreheads, skulls are crushed with baseball bats.  All of this is a reminder of the base anger of war - death comes on a wave of righteous hate.  We are red in tooth and claw regardless of who killed first.

We are then moved forward to the main storyline.  A plot has been hatched by the Allies and independently by the Jewish Heroine who is dropped into the fortuitous situation of hosting a gala cinema opening at her theater for the German high command.

Twists and turns bring us to the final conclusion which really does leave the viewer almost revelatory at the outcome.

My main concern is how the early violence is used to prepare people for happiness at murder.  The villains are so beyond reform that there isn't a scintilla of restraint in the viewer's vicarious thrill at the slaughtering of all who are hated.  This is unlike a horror movie or a drama - the viewer is deeply invested in the outcome - and roots for it.

Is this war?  I now see that we cannot live in a state without war.  It is the final act of hatred and condemnation and it will be with us as long as humanity knows right and wrong.

defaultdict to count items in Django (2009-08-27)

One of the great things I discovered today is defaultdict().  This allows one to create dictionaries with a count for each item in a very compact and powerful way.  Look at this:
>>> from collections import defaultdict
>>> d = defaultdict(int)
>>> for item in ExampleModel.objects.values('label','other_info'):
...   d[item['label']] += 1
...
>>> d.items()
[(u'key_1', 4), (u'another_key', 2)]

This will give you the number of times each label appears in ExampleModel.objects.values('label','other_info').
Boycott of Fox (2009-08-25)

One of the happy things I saw today is that the boycott of Fox (and in particular Glenn Beck) is actually picking up steam.  Companies like WalMart, UPS Stores, etc... have decided to join the campaign.


I don't think you need to sign up here:


http://foxnewsboycott.com/


But perhaps just contact one or two companies with which you do business and let them know that you're dissatisfied with their support of this low-brow vitriol.  And yes, it is their responsibility to vet their advertising outlets.


I remember seeing Glenn Beck on CNN Airport while waiting to re-enter the country (yes, real Americans explore the world) as he was telling Arabs to get out of the country and blaming them en masse for all America's ills.


Sadly, Glenn Beck reaches the most vulnerable people in our society - those with limited IQ.  We need to help them.

]]>
Adam Nelson
tag:varud.com,2013:Post/603843 2009-07-29T17:39:52Z 2013-10-08T17:30:30Z Django 1.1 Released

Wow!  Django 1.1 has finally been released.  This has been a great effort by the authors and should be a huge boon to the community.

http://www.djangoproject.com/weblog/2009/jul/29/1-point-1/

I'm already doing everything with 1.1, so when this anti-fraud product is ready, it will be on there.  Key features (for me) include grouping and advanced admin customization methods.
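For the grouping piece, here's a minimal sketch of the new aggregation API, assuming hypothetical Customer and Order models (Order having an amount field and a foreign key to Customer):

from django.db.models import Count, Sum

# One summary value computed over the whole table:
Order.objects.aggregate(total=Sum('amount'))
# -> {'total': 1234}

# One computed value per row, with the GROUP BY done in the database:
Customer.objects.annotate(num_orders=Count('order')).filter(num_orders__gte=5)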

]]>
Adam Nelson
tag:varud.com,2013:Post/603842 2009-06-17T14:07:50Z 2013-10-08T17:30:30Z Install Eclipse Galileo for Django and Pinax – Part 2

Now, onto getting Pinax installed for development purposes:

I really don't like how things have gone with Pinax installation in the past few months. I'm going to give my own take on things. Pinax has moved to GitHub and so should you.

First, create an account at GitHub. Then, fork Pinax so you have your own copy to work with. For me, I have my development version here. If you want this repository to be private, you can pay GitHub $7/month - well worth it if this is a corporate gig.

Now you can follow the Pinax directions from your own fork.

$ curl -O http://github.com/AdamN/pinax/raw/master/scripts/pinax-boot.py

$ python pinax-boot.py --development ../Documents/workspace/WhateverYouNamedProject/src/pinax

$ source ../Documents/workspace/WhateverYouNamedProject/src/pinax/bin/activate

(pinax)$ cd ../Documents/workspace/WhateverYouNamedProject/src/pinax

(pinax)$ pip install --requirement src/pinax/requirements/external_apps.txt

At this point, if you have an existing installation of Pinax on your machine, you may get an error about django-wikiapp being the wrong version. In a new command line tab, run:

pip install django-wikiapp==0.1.2

If it says it's already installed, you may need to delete the existing django-wikiapp in the directory shown by the error in the pip install command above. Now it's time to start a project:

(pinax)$ pinax-admin clone_project basic_project myproject

You now have a Pinax site in the myproject/ directory

(pinax)$ cd myproject

(pinax)$ ./manage.py syncdb

(pinax)$ ./manage.py runserver

Now, go to your browser and navigate to http://127.0.0.1:8000

Hopefully, you now see a Pinax website in all its glory.

]]>
Adam Nelson
tag:varud.com,2013:Post/603841 2009-06-16T22:15:09Z 2013-10-08T17:30:30Z Install Eclipse Galileo for Django and Pinax - Part 1

Today I decided to see if I could do a proof-of-concept Django/Pinax deployment for an anti-fraud advertising service. They already use PHP for their legacy codebase, and the existing developer uses Eclipse with the Zend plugin for debugging, so I decided to use Eclipse for the proof of concept (I have to convince that person to switch to Django).

First things first. Update your Mac to the latest and greatest software and JVM. A new JVM literally came out today so that's what I did.

Now, go download the bleeding edge release candidate Eclipse for Java EE Cocoa Version - Galileo. Some people will probably want something else, but I say that Cocoa apps are way nicer than Carbon ones and anyway, I went with Galileo. The Java EE version has web development tools built in there which I thought would be important. Feel free to comment below with a leaner way to do this - I'm not using Java.

Now, grab the new version of Subversion (if you're using Subversion). I got, and highly recommend, version 1.6.2. I usually don't rush to the latest SVN packages, but 1.4.x did not work when I tried it.

Spin up Eclipse and grab some plugins. You'll need the PyDev and Subclipse plugins. Go to Help > Install New Software, and install these:

Get the 1.6 series of Subclipse here:

http://subclipse.tigris.org/update_1.6.x

PyDev for debugging and highlighting:

http://pydev.sourceforge.net/updates/

For Subclipse, you'll get a lot of options. Just install everything (again, I'm usually lean, but I tried not installing everything and had issues - I would just do it all).

Now, run through the fantastic Getting Started guide on the PyDev website. It's written for Windows and a previous version, but you should be able to get through everything.

When you've gone through that, it's time to get your existing SVN project into the project you created for PyDev above. Just right-click on the project and select 'Import'. From there, choose SVN and import from your svn repository (if you have one - otherwise, skip this).

I will get to Pinax on the next post, when I figure it out :-)

]]>
Adam Nelson
tag:varud.com,2013:Post/603840 2009-06-05T16:18:59Z 2013-10-08T17:30:30Z Amazon Web Services

Whew, it's been a long time since a post.  Here's a small presentation about Amazon Web Services:

http://docs.google.com/EmbedSlideshow?id=dgqmn3rs_80hmx2dfcs

]]>
Adam Nelson
tag:varud.com,2013:Post/603839 2009-03-17T13:29:30Z 2013-10-08T17:30:30Z The Church is involved in Grand Theft Auto?

I found this funny line in the proxy statement of TTWO:

"The Sisters of St. Joseph of Nazareth, Michigan has submitted a stockholder proposal for consideration at the Annual Meeting..."

Basically, a convent is an investor in the producers of Grand Theft Auto and Bioshock :-)

I like their proposal though - cut out some of the fat cat pay until we see some performance improvements in the bottom line.

]]>
Adam Nelson
tag:varud.com,2013:Post/603838 2009-03-16T23:19:06Z 2013-10-08T17:30:30Z Cloud Computing

I have a lot to do right now - so what better time than now to write about stuff that has nothing to do with anything.  Recently, Brad Feld and Albert Wenger have thrown themselves into the ring with comments about the future of 'the cloud' and whether it's ready for real usage.  It most definitely is ready.

First of all, let's just give everybody a little primer on what cloud computing is.  Cloud computing is very simple: it's a network of computers (cloud) that can be used by people or organizations to run software at a metered price.  In theory, the price could be free, but just like free electricity, it costs somebody something.

Why is this a big deal?  We had that in the 70s.  It's true, the whole point of VAX and UNIX systems was to allow multiple programs to run on the same physical infrastructure without interfering with each other.  The problem with those time-sharing implementations is that they share many different high-level resources.  In the olden days (and still), you had one root user who had ultimate control, and was trusted.  Now, everybody will say at this point, "Hmm, I understand, it's a trust issue."  There is most definitely a trust problem but that's not the root of it (no pun intended).  The problem is sharing.  Just like with little children, sharing precedes trust.

Do you know what's great about your desktop?  What's great is that you get to decide whether you want to install Skitch or not.  You get to decide if you like a solar system for your wallpaper.  You know what's not great about a shared system?  You are constrained by having to share it with others who have different needs.  If you want Python 2.5.2 instead of Python 2.5.1, open a ticket with the owner of the system and wait.  In the meantime, make your program work on Python 2.5.1 because you have to - and add a little bit more clutter to a cluttered world.

Cloud computing doesn't change the fundamentals of sharing a server, it just reduces the amount of stuff that is shared.  In a large system, if you are a small user (i.e. not a top 1,000 site), you can basically pretend that the machine is infinite.  What you are sharing is simply the CPU, the memory, the hard drives, etc... However, you get minimum amounts of those things that nobody else can infringe upon, and which are commodities at this point.  Most importantly, you're not sharing anything for which there are many different options.  Somebody could in theory say, "I want to program for an ARM processor, but the cloud doesn't support that".  However, very few are doing large scale Internet deployments on something other than x86/SPARC using standard device architectures.  With the cloud, I simply don't have to share as much.

All the arguments against the cloud miss the point.  One has to view it in a real world context.  There are only a few alternatives to cloud computing.  One is to share it with other people the old fashioned way (see above).  Except for cost considerations, this is not a better option because other people are stepping on your turf.  The other option is to get a dedicated server.  The problem with the dedicated server is that you only have one of them.  There's no redundancy at the machine level.  If the network device goes, you're out of luck.  You're getting the gain of having your own system while giving up the economies of scale that go with sharing.

Cloud computing is about sharing what everybody agrees are standard commodities (CPU, memory, IO, etc...) while not having to share things like high-level language libraries - for which many options may be necessary.

]]>
Adam Nelson
tag:varud.com,2013:Post/603837 2009-03-16T14:40:29Z 2013-10-08T17:30:30Z OpenGov at SXSW It might be a little late to blog about it - but this looks like a great event.  When I was still contemplating going to Austin, I had this in my list of things to check out - but then I decided not to go (save $$$/get things done) and didn't think about this until cleaning up my email.  Anyway, it starts in 2 hours for anybody out there: 

-----

We (LoTV) are going to do Ignite style talks at lunch during SXSW interactive.  I think this really is relevant to the Open Data group because in MY opinion Open data is the key to open Govt.  Anyone wanna talk about potential applications they are working on that are going to SXSW? 

We are an Official SXSW Event 
We are an Official Ignite.oreilly.com event 
and we are an Official Sunshineweek.org event 

 They will be professionally videotaped and posted to multiple sites afterwards in 5 min increments. 

I would love it if you could blog about it? 

1) to recruit people that are already attending SXSW 
2) to highlight the projects coming 
3) post the video links of "hot" projects afterwards? 

 It is 12:30 -2:00 Mar 16th 
Fiddler's hearth 
301 Barton Springs Rd 
Austin, TX 78704 

 If you are coming to SXSW and have a project you would like to talk about IGNITE style - 20 slides 15 seconds per slide (auto forwarding)  Please let me know so we can put you on the schedule! 

Thanks! 
 -- Silona Bonewald 
Gtalk: Silona@gmail.com is the BEST WAY to contact me 
http://leagueoftechvoters.org - 501c3 to involve geeks in the political process 
http://whitehouse.wikia.com - versioning and documenting whitehouse.gov 
http://transparentfederalbudget.com - anyone documenting the budget on a paragraph by paragraph basis 
http://wiki.budgetwiki.com - the wiki where we are designing it all 
Cell: 512-750-9220 (I check voice messages only once a day!)
]]>
Adam Nelson
tag:varud.com,2013:Post/603835 2009-03-06T21:11:33Z 2013-10-08T17:30:30Z Reverse Generic Relations First off, apologies for not posting in some time.  I've been spending the past week exploring a couple of different technologies - here's a short summary:
  • Amazon ec2 - I know I've talked about this before but these guys are way ahead of the game.  I'm not deep enough in the field to confirm where the other players are (Google App Engine, etc...), but I'm continually impressed by ec2.  Cloud computing really is the future.  I can only think of a handful of projects for which I would not deploy a cloud computing infrastructure (internal file server I suppose).  All new Internet applications should be placed on a cloud system.
  • PHP - I'm sorry to say this, but I'm short on PHP.  I really don't know if there's a future for it.  Maybe some of the frameworks (Zend, Cake) can save it - I'm not sure.  One thing I know is that I'm leaving PHP behind.
  • In the vein of PHP - which, BTW, is a solid OO language now - I want to be clear about the power of OO web frameworks like Django: they are critical to future development.
  • Rails documentation is really bad.  Try to figure out what I'm about to tell you with Django in a Rails way - I dare you [Comments section open]
Django has something called ContentTypes which allows for the creation of models that can generically relate to many other objects.  A straightforward example would be a tagging system.  Tags can be placed on a blog post, a page, a person, a group, etc...  When creating the Tag model, you could think ahead about all the possible applications of a given tag, but that would be crazy.  Why not just use a standard ContentType relation to handle it - in other words, a relation that can point to any other model, even ones you create later.  That's the power of this object.  In the database table, it uses two columns: one is the id of the Content Type itself (User, Bookmark, Tag, etc..), the other is the id of the object (the id of the user, the bookmark, the tag, etc...).  Because the system is smart though, it's all available abstractly.
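To make this concrete, here's a minimal sketch of such a Tag model (the model and field names are just for illustration), using the contenttypes API as it stands in the Django 1.0/1.1 era:

from django.db import models
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes import generic

class Tag(models.Model):
    label = models.CharField(max_length=50)
    # The two database columns described above:
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    # The abstract handle that turns them into a pointer at any model instance:
    content_object = generic.GenericForeignKey('content_type', 'object_id')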

But wait, it doesn't stop there.  If you do want to make life even easier for a model like a Bookmark, so that you can get the tags for it, you can use their Reverse Generic Relations to give you object oriented access to all the children of a given parent (all the tags of a given bookmark).  And, for each of those children, you can find their children (i.e. all the votes for a given bookmark a la Digg).  All this is object oriented and requires zero SQL.
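Continuing the illustrative sketch above, the reverse side might look like this:

class Bookmark(models.Model):
    url = models.URLField()
    # Reverse generic relation back to the Tag model sketched earlier:
    tags = generic.GenericRelation(Tag)

Then bookmark.tags.all() returns every Tag attached to that bookmark - object oriented, with zero hand-written SQL.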

Kudos to the Django team for creating such a great abstraction - but also for documenting it so well.  I am now a Django evangelist.
]]>
Adam Nelson
tag:varud.com,2013:Post/603834 2009-02-20T17:18:44Z 2013-10-08T17:30:30Z NY Times Open Here is a post on the morning session of the NY Times Open.  Janet Robinson, the NY Times President (?), gave the opening, based on getting people to remember both the heritage of the paper (Pulitzers) and its future: leveraging the creativity of the developer world to make the content better through interactivity, global comment/feedback, and openness. 

Derek Gottfrid, Sr. Software Architect, talked about really getting this whole thing working.  They want their APIs to be mashable (in fact, hooking into Mashable), and to get that data out there. 

Now, for the main event.  The big man is out, Tim O'Reilly.  He's wearing the standard cargo pants - black this time.  His first point is that companies and organizations have to breed innovation by having conferences, bringing people together, etc...  Here are his points:
  1. Harnessing Collective Intelligence - Wikipedia of course - but there's more to it.  Digg is another obvious one.  The heart is still Google.  Google simply does a great job with PageRank.  PageRank introduced aggregated social analysis with link network density.  Links matter!  And they still do.  Wesabe gives tips on personal finance based on aggregate data that the credit card companies had for years.  
  2. Real Time is critical - Just to go back to Google, they are fast and time sensitive.  Wal-Mart is a great example of accomplishing this with real-time data modeling that allows for better inventory/pricing management.  On the Obama call lists, if it was found that somebody had voted, they were taken off the get-out-the-vote list in real time.
  3. Network effect allows companies to stay ahead if they can harness their network.  What assets do you have in your network?  What are people telling you that is just being dropped on the floor?  Does the accumulated history help in any way?  And .... don't forget to make it faster.
  4. Social Networking is a breakthrough - namely with the social graph from Facebook.  For NYTimes though, they have a slice that they're working with.  They are writing "All the News That's Fit to Print".  The Times has a long history of interacting with readers through the opinion section.  For instance, NY Times People: nobody really uses it.  Twitter and FB should be integrated - that's what the users want.
  5. Programming as Journalism - USASpending.gov, which is really a clone of FedSpending.org.  This is programmer-driven reportage.  Programmers really need to be leveraged for story creation.  StimulusWatch.org is another site that really shows how the money is being spent.  InSTEDD.org is another example of reporting by programmer.  GVFI.org also.
  6. Instrumenting the world: Keyboards are not the ideal input for collaboration.  Cameras, motion sensors, GPS, etc...
  7. Internet as Platform.  The goal of becoming a platform is not to get everybody onto your controlled network.  You have to link to the context of the news.  O'Reilly Radar uses mouseover previews that allow people to see context rather than clicking off a page.  Google Maps is the model where Google is following the hacks.  Partners and random hackers may create new features before you do.
O'Reilly's main point is that we're on a road, we don't know where we'll end up.  We have to keep gas in the tank but it's also important not to just make the trip into a tour of gas stations.  Think of what you do, and make it happen.
]]>
Adam Nelson
tag:varud.com,2013:Post/603833 2009-02-19T15:46:50Z 2013-10-08T17:30:30Z Hiring People

Recently, I've been helping a seed-level company find somebody to be their head of technology.  This is a company that's been around for over a year, has revenue, and has 3 full time employees and 2-3 part time people.

Because the CEO isn't a tech person, there's a slight difference between his decision making and mine.  Luckily, I can understand a programmer's work output and judge it objectively.  Unfortunately, for non-tech people, that is not possible.  So, he found a part-time CTO, let that person build an app with some contract workers, and is now trying to move beyond that arrangement by having somebody full time in the same office.  It's a great growth step and everybody is in favor (including the CTO, who understands that his location and time commitment aren't ideal for the next year).

Here are my requirements for somebody in this position.  All people considering such a hire, even in the beginning, should consider these needs:

  • Somebody who can hold his/her own with you and any existing tech person (i.e. has enough experience that you two won't doubt his/her decisions).
  • Somebody who can program in a web 2.0 language or framework (PHP, Django, Ruby on Rails, Javascript)
  • Somebody who lives in the current web 2.0 world so he/she can tell you about trends and make intelligent decisions.
  • Somebody who can communicate effectively.
  • Somebody who has managed at least 2 other people before - but preferably not more than 8.
  • Somebody who is trustworthy and responsible.

Do not, I repeat, do not, hire somebody who doesn't meet 5 of these 6 criteria.  Did I say that clearly?  Do not, I repeat, do not, hire somebody who doesn't meet 5 of these 6 criteria.  Ideally, meet all the requirements.  Obviously, the second to last requirement is geared towards a small startup with less than $1MM and looking to create an Internet software company.

Another important issue is education and credentials.  The person should have a B.S. degree in something with some sort of computer science course work.  If you went to a top school and think you are stuck with, or can get away with, somebody who didn't go to a similar calibre school, you're just putting yourself in a poor position.  The head of technology is probably the most important person in the company in addition to the CEO and the sales/marketing lead.  If that person is not your peer, or the peer of your younger self, you won't respect them - and that will take you nowhere.

]]>
Adam Nelson