
Next Gen Dev Mdl #112

Open
fscottfoti opened this issue Sep 9, 2014 · 0 comments

I want to open up an issue about the work that probably needs to be done soon on the next generation developer model. I'm going to outline how this might look from my perspective but I expect there will be feedback from multiple places. @jdoliveira and @cvanegas I'm looking at you ;)

OK, as a reminder, this is how the developer model works now. Basically we take in a set of parcels, with max FARs, max height limits, parking requirements, a standard set of costs, another set of prices, and probably a few other things. We first precompute feasibility for the whole set of possible inputs on a limited set of input combinations. Specifically, we test multiple FARs and figure out the "break even" price: if this price is exceeded in the marketplace, the development is profitable. This is really nice from a performance standpoint because we can precompute feasibility, and everything from that point on is a lookup for each parcel and the inputs associated with that parcel. This is VERY fast and so really useful for regional modeling, where we're testing 2M parcels for feasibility every year. Keep in mind each UrbanSim simulated year runs in about 6 minutes, and the feasibility model is already 1 minute of that time.

The proposal I'm going to describe here is likely to be MUCH slower. I will describe how we might parallelize it and run it in C, but I just don't think there's any getting around the fact that it will be an order of magnitude slower than the current implementation, and the bottleneck of the simulation. As such, its primary use case will be subarea studies, although we can try it in UrbanSim too.
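To make the current precompute-and-lookup pattern concrete, here's a minimal sketch (not the actual UrbanSim code): compute a break-even price per candidate FAR once, then feasibility for any number of parcels is just a vectorized comparison. The cost numbers and efficiency ratio are illustrative assumptions.

```python
import numpy as np

FARS = np.array([0.5, 1.0, 2.0, 4.0])           # candidate FARs to test
COST_PER_SQFT = np.array([120.0, 140.0, 170.0, 220.0])  # cost rises with height
EFFICIENCY = 0.85                                # net-to-gross ratio (assumed)

# break-even sale price per net sqft for each candidate FAR
breakeven = COST_PER_SQFT / EFFICIENCY           # shape (n_fars,)

def feasible(parcel_prices, parcel_max_far):
    """Vectorized lookup: which (parcel, FAR) combinations pencil out."""
    allowed = FARS[None, :] <= parcel_max_far[:, None]       # zoning check
    profitable = parcel_prices[:, None] > breakeven[None, :]  # market check
    return allowed & profitable                  # shape (n_parcels, n_fars)

prices = np.array([150.0, 300.0])   # market price per net sqft, per parcel
max_far = np.array([1.0, 4.0])      # zoned max FAR, per parcel
mask = feasible(prices, max_far)
```

The expensive part (the break-even table) is computed once; the per-parcel work is two broadcasts and a boolean AND, which is why this scales to millions of parcels.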

In general I'm thinking along these lines. In UrbanCanvas we have what we call development types, which in my mind are similar to "products" that a developer might describe. Here's a discussion of standard developer product types and also the need for alternatives. UrbanFootprint, for instance, uses about 90 building types. At any rate, we will also have some set of building types, which we call development types. The idea here is to integrate these development types into the inner loop of a feasibility calculation. So in our case, a development type will be a product type, like "affordable townhome, wood construction" or "luxury condo, mid-rise." Our development types therefore carry a number of implicit assumptions, which are detailed and very useful for both feasibility and visualization -

  • Most importantly, we can create the "buildable area" based on height limits, setbacks, and odd-shaped parcels, and figure out if we can actually fit the allowed amount of FAR on the parcel
  • There is an implicit assumption of materials, which should help define a specific product type right from the RSMeans handbook so we can get the cost of a specific development type
  • There is an implicit assumption of the quality of the resulting building, which will define where on the price curve this building might sit (is it luxury or affordable?) so we can balance the prices accordingly
  • Other inputs might be tied to development types, like unit mix and parking requirements. For instance, luxury condos might have a different unit mix and parking requirements than other developments
  • The visualization of the building is tied directly to all these things so that the visualization is at the same level of detail as the analysis
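As a rough illustration of the bullets above, a development type record might bundle these implicit assumptions together. This is a hypothetical schema (the field names and values are made up for illustration, not an existing UrbanCanvas structure):

```python
from dataclasses import dataclass, field

@dataclass
class DevelopmentType:
    """One developer 'product': carries the implicit assumptions listed above."""
    name: str                   # e.g. "luxury condo, mid-rise"
    construction: str           # material class; would key into an RSMeans-style cost table
    max_stories: int            # constrains the buildable-area / geometry check
    quality_tier: str           # "affordable" / "market" / "luxury" -> position on the price curve
    unit_mix: dict = field(default_factory=dict)  # share of units by bedroom count
    parking_per_unit: float = 1.0
    parking_type: str = "surface"  # surface / structured / underground -> cost per sqft

LUXURY_MIDRISE = DevelopmentType(
    name="luxury condo, mid-rise",
    construction="steel",
    max_stories=8,
    quality_tier="luxury",
    unit_mix={"1br": 0.3, "2br": 0.5, "3br": 0.2},
    parking_per_unit=1.5,
    parking_type="structured",
)
```

The point is that feasibility, costing, pricing, and visualization can all read from the same record, which is what keeps the analysis and the rendering at the same level of detail.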

OK, so my proposal is that we do this the slow and accurate and easy-to-modify way now. I'm assuming that each parcel has a certain number of development types that we test on it. For example, we might test townhomes and mid-rise condos on certain parcels. Mid-rise condos probably have a range of densities, so we might test them at the max FAR allowed by zoning (assuming it's allowed by the geometry) as well as at the inflection points (which are essentially the points just below the heights at which construction costs increase). But there is a clear set of development types/FARs that we test, probably on the order of 10-20 forms per parcel. The development types already give us space by use (like the number of residential sqft, retail sqft, office sqft, etc.) and we can translate this to feasibility using a pencil-out pro forma (exactly the same as we do with Penciler). We go from gross to net square footage, have unit mix multipliers (to give us the unit mix), prices per unit, costs per gross square foot, parking requirements, sqft per parking space, cost per sqft for different kinds of parking, etc. (these were all in Penciler).

Now, having implemented this in Penciler (in Javascript), I can say it's quite a bit easier to code, maintain, and understand a pro forma written in this way than the current vectorized implementation written in UrbanSim/numpy (remember, the current version is written for performance). Here, I think we know we have to avoid pure Python because of its slow for loops, so I think we have to go to Cython, Numba, or similar. I don't have a great deal of experience with Cython so there's overhead in learning it, but I imagine it will be pretty easy to read once it's learned and written. In this way, we only really need to code the pencil-out pro forma for one development type at a time (with for loops rather than vectors and if statements rather than multiplies), and then wrap it up for the 10-20 development types that are allowed per parcel. I'm guessing we'll need to call the CGA parser locally as a C library in the inner loop. I do not think it will be possible to call it as a service, as that might take too long, but we can try it that way at first since we already have it compiled and running that way. Since it's in Cython, it will be as fast as C, and it can be parallelized with OpenMP on platforms that support it. Don't think this means fast though - if this is run a million times it will be very slow. Presumably this can also be integrated directly into UrbanCanvas (Cython compiles to native C code, though I don't understand the details yet).
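The loop shape I have in mind looks like this in plain Python (the shape that would later get ported to Cython or Numba). `fits_geometry` and `pro_forma` below are toy stand-ins for the CGA/buildable-area check and the pencil-out pro forma, with made-up math just to keep the sketch runnable:

```python
def fits_geometry(parcel, dt, far):
    """Toy stand-in for the CGA buildable-area check: does the envelope fit?"""
    # Assumed rule: footprint at max stories must fit within the parcel.
    return parcel["sqft"] * far / dt["max_stories"] <= parcel["sqft"]

def pro_forma(parcel, dt, far):
    """Toy stand-in for the pencil-out pro forma (profit per form)."""
    gross = parcel["sqft"] * far
    return gross * (parcel["price"] - dt["cost"])

def best_development(parcel, dev_types):
    """Inner loop: for loops and if statements, one development type at a time."""
    best, best_profit = None, 0.0
    for dt in dev_types:
        for far in dt["fars_to_test"]:          # max FAR plus inflection points
            if far > parcel["max_far"]:         # zoning check
                continue
            if not fits_geometry(parcel, dt, far):  # geometry check
                continue
            profit = pro_forma(parcel, dt, far)
            if profit > best_profit:
                best, best_profit = (dt["name"], far), profit
    return best, best_profit

parcel = {"sqft": 10_000, "max_far": 3.0, "price": 200.0}
dev_types = [
    {"name": "townhome", "max_stories": 3, "cost": 150.0, "fars_to_test": [0.5, 1.0]},
    {"name": "mid-rise condo", "max_stories": 8, "cost": 180.0, "fars_to_test": [2.0, 4.0]},
]
best, best_profit = best_development(parcel, dev_types)
```

Because the outer loop over parcels is embarrassingly parallel, this is exactly the structure that Cython's `prange` (OpenMP-backed) could parallelize once the inner functions are typed.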

This sort of approach will solve a number of problems for us, as numerous projects are asking for subarea high-detail pro forma calculations, with the ability to modify inputs and see the results at a neighborhood scale. Sooner or later we'll probably have to go this way and I wanted to start a discussion on how we can get it done.
