UrbanVision Features
UrbanVision is an open-source system for urban modeling, simulation, and visualization, built to extend UrbanSim. It is a joint project of the University of California, Berkeley and Purdue University. Main website: <link forthcoming>. These pages contain a visual summary of the project.
Overview
UrbanVision is a client-server system that interactively provides simulation and visualization outputs. The simulation outputs are the focus of the Berkeley team, while the automated 3D urban model creation and several visualization outputs are the emphasis of the Purdue team.
The simulation outputs include predictions and analyses of alternative land use and transportation policies and plans, together with automatic, detailed simulation of the resulting real estate development and redevelopment at the parcel level. The system also provides the capability to alter land use policies, such as density and land use mix, within localized areas and to rerun the analysis rapidly.
The visualization outputs include compelling interactive views of 3D urban models created automatically from the simulation outputs. The system supports fly-through and walk-through interfaces that allow non-technical stakeholders to more directly interpret the nature of the land use and the outcomes of the simulation models. In particular, we provide several geometric processing capabilities for the input GIS-style data, procedurally generated building models, an adaptive vegetation model, and a dynamic simulation of plausible vehicle and pedestrian behavior based on the travel-model simulation outputs.
Altogether, the system’s capabilities provide valuable assistance in the difficult task of engaging the public and key local stakeholders in evaluating alternative approaches to attaining more sustainable communities and achieving consensus on the policies and plans to implement.
Summary of Features
In the following, we provide a summary of the main features enabled by the Purdue team and collaborators. Details about the simulation outputs are forthcoming.
GIS Data Support
UrbanVision is designed to serve as a platform for planning projects worldwide. For this reason, the software supports standard GIS data formats for loading roads, parcels, and aerial imagery. Roads are loaded from OpenStreetMap data and parcels are read from ESRI Shapefiles. Furthermore, these geometric entities can be stored in PostGIS relational databases and dynamically fetched and queried from UrbanVision. Georeferenced aerial orthographic imagery is also supported using the GeoTIFF format.
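As an illustration, the following is a minimal sketch of how parcel polygons could be read from an ESRI Shapefile using the GDAL/OGR C++ API. The file name "parcels.shp" and the per-parcel processing are placeholders, not the actual UrbanVision loading code.

    // Minimal sketch: reading parcel polygons from an ESRI Shapefile via GDAL/OGR.
    // "parcels.shp" is a placeholder file name.
    #include <gdal_priv.h>
    #include <ogrsf_frmts.h>
    #include <cstdio>

    int main()
    {
        GDALAllRegister();
        GDALDataset* ds = static_cast<GDALDataset*>(
            GDALOpenEx("parcels.shp", GDAL_OF_VECTOR, nullptr, nullptr, nullptr));
        if (!ds) return 1;

        OGRLayer* layer = ds->GetLayer(0);
        layer->ResetReading();
        OGRFeature* feature = nullptr;
        while ((feature = layer->GetNextFeature()) != nullptr) {
            OGRGeometry* geom = feature->GetGeometryRef();
            if (geom && wkbFlatten(geom->getGeometryType()) == wkbPolygon) {
                OGRPolygon* parcel = static_cast<OGRPolygon*>(geom);
                std::printf("parcel area: %.1f\n", parcel->get_Area());
            }
            OGRFeature::DestroyFeature(feature);
        }
        GDALClose(ds);
        return 0;
    }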
Road Networks, Blocks and Parcels
UrbanVision uses internal data structures for storing GIS data that allow for very efficient computation, visualization, editing, and querying. Road connectivity is computed and stored in a Boost graph, where routing and flow algorithms can be run, in some cases, at interactive speeds. Parcels are automatically aggregated into blocks and their adjacency is also computed in order to support per-block querying and editing operations such as aggregation and subdivision. Furthermore, while road and parcel data might come from different sources, UrbanVision preprocesses this data and stores connectivity information between roads and parcels. This data integration is crucial for some of the UrbanVision behavioral models that operate on both types of data, such as the travel model and the accessibility model.
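As a sketch of this kind of representation, the example below builds a small road graph with the Boost Graph Library and runs Dijkstra's algorithm on it. The node layout and edge lengths are invented for illustration and are not taken from the system.

    // Sketch: a road graph in the Boost Graph Library with shortest-path routing.
    // Edge weights represent segment lengths in metres (illustrative values).
    #include <boost/graph/adjacency_list.hpp>
    #include <boost/graph/dijkstra_shortest_paths.hpp>
    #include <iostream>
    #include <vector>

    using RoadGraph = boost::adjacency_list<
        boost::vecS, boost::vecS, boost::undirectedS,
        boost::no_property,
        boost::property<boost::edge_weight_t, double>>;

    int main()
    {
        RoadGraph g(4);                     // four intersections
        boost::add_edge(0, 1, 120.0, g);    // road segments with lengths
        boost::add_edge(1, 2,  80.0, g);
        boost::add_edge(0, 3, 200.0, g);
        boost::add_edge(3, 2,  60.0, g);

        std::vector<double> dist(boost::num_vertices(g));
        boost::dijkstra_shortest_paths(
            g, 0,
            boost::distance_map(boost::make_iterator_property_map(
                dist.begin(), boost::get(boost::vertex_index, g))));

        std::cout << "distance from node 0 to node 2: " << dist[2] << " m\n";
        return 0;
    }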
Several operations that involve geometric computations are also available in UrbanVision. These operations are frequently used by the behavioral models in the system. The supported computations include consolidating two or more adjacent parcels, subdividing a parcel based on given subdivision attributes, computing the buildable area inside a parcel based on setback distances from neighboring streets and adjacent parcels, and computing the right of way associated with a new or existing road segment.
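For instance, a buildable-area computation of the kind listed above can be approximated by insetting the parcel boundary by a setback distance. The sketch below does this with a negative buffer in Boost.Geometry; the parcel coordinates and the uniform 5 m setback are illustrative, whereas the actual system uses per-edge setbacks from streets and adjacent parcels.

    // Sketch: approximate buildable area by insetting a parcel polygon with a
    // uniform setback, using Boost.Geometry's buffer with a negative distance.
    #include <boost/geometry.hpp>
    #include <boost/geometry/geometries/point_xy.hpp>
    #include <boost/geometry/geometries/polygon.hpp>
    #include <boost/geometry/geometries/multi_polygon.hpp>
    #include <iostream>

    namespace bg = boost::geometry;
    using Point        = bg::model::d2::point_xy<double>;
    using Polygon      = bg::model::polygon<Point>;
    using MultiPolygon = bg::model::multi_polygon<Polygon>;

    int main()
    {
        Polygon parcel;
        bg::read_wkt("POLYGON((0 0, 40 0, 40 30, 0 30, 0 0))", parcel);
        bg::correct(parcel);                 // fix ring orientation/closure

        const double setback = 5.0;          // metres, uniform for this sketch
        MultiPolygon buildable;
        bg::buffer(parcel, buildable,
                   bg::strategy::buffer::distance_symmetric<double>(-setback),
                   bg::strategy::buffer::side_straight(),
                   bg::strategy::buffer::join_miter(),
                   bg::strategy::buffer::end_flat(),
                   bg::strategy::buffer::point_square());

        std::cout << "parcel area:    " << bg::area(parcel)    << "\n";
        std::cout << "buildable area: " << bg::area(buildable) << "\n";
        return 0;
    }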
Terrain and Aerial Imagery
Digital elevation models and high-resolution orthoimagery are combined by UrbanVision to provide a 3D representation of the terrain. Terrain elevation is read from DEM files and orthoimagery is loaded from GeoTIFF files. UrbanVision supports static and dynamic tiling of aerial imagery. Static tiling consists of loading a fixed set of images and projecting them onto the terrain. Dynamic tiling is a demand-based approach where fragments of images are loaded with different sizes and resolutions depending on the camera pose.
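The sketch below shows one way a DEM could be sampled with GDAL, converting a world coordinate to a pixel and reading its elevation. The file name "terrain.tif" and the query point are illustrative, and an axis-aligned geotransform is assumed.

    // Sketch: reading terrain elevation from a DEM with GDAL.
    #include <gdal_priv.h>
    #include <cstdio>

    int main()
    {
        GDALAllRegister();
        GDALDataset* dem = static_cast<GDALDataset*>(
            GDALOpen("terrain.tif", GA_ReadOnly));
        if (!dem) return 1;

        double gt[6];                        // affine geotransform: pixel <-> world
        dem->GetGeoTransform(gt);

        // World coordinate to query, in the DEM's CRS (illustrative values).
        const double wx = 553200.0, wy = 4183400.0;
        int px = static_cast<int>((wx - gt[0]) / gt[1]);
        int py = static_cast<int>((wy - gt[3]) / gt[5]);

        float elevation = 0.0f;
        GDALRasterBand* band = dem->GetRasterBand(1);
        band->RasterIO(GF_Read, px, py, 1, 1, &elevation, 1, 1, GDT_Float32, 0, 0);

        std::printf("elevation at (%.0f, %.0f): %.1f m\n", wx, wy, elevation);
        GDALClose(dem);
        return 0;
    }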
Procedural and Parametric Buildings
UrbanVision supports automatically generating a plausible set of 3D building envelope models based on GIS input and simulation outputs, as well as including an explicit set of city landmark buildings when so desired. While the objective of UrbanVision is not to precisely recreate a current city, it does support creating a qualitatively similar model which can be easily recreated for different predicted futures of a city. Following a building typology study for the destination city, the system includes a set of base building types (e.g., 14 in the case of San Francisco), which are configured using parameters to depict a rich variety of building geometries. While each base type (e.g., school, big-retail buildings, offices, etc.) captures common structural characteristics, in total a much larger number of building styles are possible due to the parameterization (e.g., over 60 styles for the San Francisco Bay Area). The control parameters, such as number of stories, footprint geometry, square footage, and frontage, are derived from the underlying parcel information. If there is not enough information in a parcel to derive a building geometry, heuristics based on nearby properties are applied to determine a best guess. In addition to supporting automatic generation of buildings, famous landmark buildings can be explicitly imported into the system. This collection of models contains unique and fixed-location buildings, which are either modeled from scratch or exported from Google Warehouse/Earth.
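The following is a simplified sketch of how control parameters for a procedural building could be derived from parcel attributes, including a fallback heuristic when data is missing. The field names and the particular heuristic (averaging neighbouring parcels) are illustrative, not the system's actual rules.

    // Sketch: deriving procedural building parameters from parcel attributes,
    // with a fallback heuristic for missing data (illustrative only).
    #include <cstdio>
    #include <vector>

    struct Parcel {
        double areaSqFt;       // parcel area
        double buildingSqFt;   // built square footage, 0 if unknown
        int    stories;        // 0 if unknown
    };

    struct BuildingParams {
        int    stories;
        double footprintSqFt;
    };

    BuildingParams deriveBuilding(const Parcel& p, const std::vector<Parcel>& neighbours)
    {
        int stories = p.stories;
        if (stories == 0) {                   // missing data: borrow from neighbours
            int sum = 0, n = 0;
            for (const Parcel& q : neighbours)
                if (q.stories > 0) { sum += q.stories; ++n; }
            stories = n > 0 ? sum / n : 1;    // default to one story
        }
        double sqft = p.buildingSqFt > 0 ? p.buildingSqFt : 0.5 * p.areaSqFt;
        return { stories, sqft / stories };   // footprint = total area / stories
    }

    int main()
    {
        Parcel target{ 6000.0, 0.0, 0 };                            // incomplete record
        std::vector<Parcel> nearby{ {5000, 7500, 3}, {5500, 5500, 2} };
        BuildingParams b = deriveBuilding(target, nearby);
        std::printf("%d stories, footprint %.0f sq ft\n", b.stories, b.footprintSqFt);
        return 0;
    }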
Adaptive Vegetation
UrbanVision uses virtual vegetation to add realism to the cityscape. Individual plant positions are placed randomly, generated from procedural rules, or extracted from per-parcel vegetation information. For example, vegetation is seeded along sidewalk center lines and inside parcels. In addition, there are two types of collision detection that the vegetation responds to: plant-to-plant collision and collision against areas of the terrain marked as obstacles. If either type of collision is detected, plants may be removed. Plant-to-plant collision is detected by computing the intersection between circles that represent the plant footprints. Collisions with obstacles are computed by automatically creating an auxiliary obstacle map from the roads, sidewalks, and building footprints, and seeding plants only in areas with no obstacles. This approach allows for regeneration of vegetation at interactive rates whenever the city geometry changes. Finally, detailed tree models of several species are efficiently rendered using advanced multi-texturing techniques.
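The sketch below illustrates the two collision tests described above: circle-versus-circle overlap between plant footprints, and a lookup in a rasterized obstacle map. The grid resolution, plant radii, and rejection-sampling loop are illustrative.

    // Sketch: seeding plants with plant-to-plant and obstacle-map collision tests.
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    struct Plant { float x, y, radius; };

    const int GRID = 100;                                       // obstacle map resolution
    std::vector<unsigned char> obstacleMap(GRID * GRID, 0);     // 1 = road/building/sidewalk

    bool blocked(float x, float y)
    {
        int ix = static_cast<int>(x), iy = static_cast<int>(y);
        return obstacleMap[iy * GRID + ix] != 0;
    }

    bool overlaps(const Plant& a, const Plant& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, r = a.radius + b.radius;
        return dx * dx + dy * dy < r * r;                       // circle-circle intersection
    }

    int main()
    {
        std::vector<Plant> plants;
        for (int i = 0; i < 1000; ++i) {                        // candidate positions
            Plant p{ static_cast<float>(std::rand() % GRID),
                     static_cast<float>(std::rand() % GRID), 2.0f };
            if (blocked(p.x, p.y)) continue;                    // rejected: on an obstacle
            bool hit = false;
            for (const Plant& q : plants)
                if (overlaps(p, q)) { hit = true; break; }
            if (!hit) plants.push_back(p);                      // accepted
        }
        std::printf("placed %zu plants\n", plants.size());
        return 0;
    }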
Dynamic Traffic and Pedestrians
UrbanVision shows moving vehicles and pedestrians drawn over road lanes and sidewalks, all automatically generated based on values from the underlying road graph and the simulation outputs. In particular, the vehicular and pedestrian density (and speed) values are obtained from the travel model simulation output. The road lanes and sidewalks are computed from the road graph and parcel geometry information. Each vehicle follows the path of the center polyline of the road graph and tries to reach the maximum lane speed. If a vehicle detects that the vehicle in front is too close, it will decrease its speed. Similarly, if a vehicle detects that a vehicle behind it is getting too close, it will increase its speed. When vehicles approach intersections or the end of road segments, they are removed from the front of the lanes and pushed to the back of the lanes.
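The sketch below shows a per-frame update for one lane of vehicles following the behavior just described: slow down when too close to the car ahead, otherwise accelerate toward the lane's maximum speed, and recycle vehicles to the back of the lane at its end. The gap threshold, accelerations, and lane data are illustrative.

    // Sketch: simple car-following update for one lane (illustrative constants).
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Vehicle { double pos; double speed; };   // position along lane (m), speed (m/s)

    void updateLane(std::vector<Vehicle>& lane, double maxSpeed,
                    double laneLength, double dt)
    {
        const double safeGap = 8.0;                 // metres (illustrative)
        for (std::size_t i = 0; i < lane.size(); ++i) {
            bool tooClose = i + 1 < lane.size() &&
                            (lane[i + 1].pos - lane[i].pos) < safeGap;
            if (tooClose)
                lane[i].speed = std::max(0.0, lane[i].speed - 2.0 * dt);
            else if (lane[i].speed < maxSpeed)
                lane[i].speed = std::min(maxSpeed, lane[i].speed + 1.5 * dt);
            lane[i].pos += lane[i].speed * dt;
            if (lane[i].pos > laneLength)           // reached the end of the segment:
                lane[i].pos -= laneLength;          // recycle to the back of the lane
        }
    }

    int main()
    {
        std::vector<Vehicle> lane{ {0.0, 10.0}, {20.0, 12.0}, {60.0, 8.0} };  // sorted by pos
        for (int frame = 0; frame < 300; ++frame)
            updateLane(lane, 13.9, 200.0, 1.0 / 30.0);
        for (const Vehicle& v : lane)
            std::printf("pos %.1f m, speed %.1f m/s\n", v.pos, v.speed);
        return 0;
    }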
Pedestrians are animated characters of various types (e.g., male, female, business attire, casual clothing) moving over the sidewalks of the cityscape. Each sidewalk has a travel-model-computed density value. Pedestrian character types are generated stochastically based on a desired global distribution. Sidewalks are divided into invisible lanes that pedestrians follow at a predetermined speed. A pedestrian is animated by using pre-recorded templates of an underlying skeleton model augmented with vertex skinning.
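As a small sketch of the stochastic type generation, the example below samples pedestrian character types so that they match a desired global distribution. The four types and their weights are illustrative.

    // Sketch: sampling pedestrian character types from a target distribution.
    #include <cstdio>
    #include <map>
    #include <random>
    #include <string>
    #include <vector>

    int main()
    {
        std::vector<std::string> types{ "male_casual", "female_casual",
                                        "male_business", "female_business" };
        std::discrete_distribution<int> pick{ 0.35, 0.35, 0.15, 0.15 };   // target mix
        std::mt19937 rng(42);

        std::map<std::string, int> counts;
        for (int i = 0; i < 10000; ++i)
            ++counts[types[pick(rng)]];
        for (const auto& kv : counts)
            std::printf("%-16s %d\n", kv.first.c_str(), kv.second);
        return 0;
    }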
Rendering Acceleration
We implemented several toolboxes to obtain efficient rendering performance within UrbanVision. For the terrain, vehicles, and trees, vertices are sorted based on material ID and subsequently vertex buffer objects (i.e., VBOs) are generated for efficiently storing them inside the GPU -- this optimization provides a significant speedup since all memory accesses are local to the GPU. In a similar style, all vertices of the procedural buildings sharing the same OpenGL state are also grouped into compact VBOs. We observed speedups of 200 times over using traditional OpenGL display lists (see video).
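The sketch below illustrates this idea: vertices are sorted by material ID and each contiguous group is uploaded into its own VBO, so rendering can issue one draw call per material. It assumes a valid OpenGL context created elsewhere and uses GLEW for the buffer entry points; the Vertex layout is illustrative.

    // Sketch: grouping vertices by material ID and uploading each group as a VBO.
    // Assumes an OpenGL context already exists (e.g., created by the application).
    #include <GL/glew.h>
    #include <algorithm>
    #include <cstddef>
    #include <map>
    #include <vector>

    struct Vertex { float pos[3]; float normal[3]; float uv[2]; int materialId; };

    std::map<int, GLuint> buildMaterialVBOs(std::vector<Vertex> vertices)
    {
        // Sort so that all vertices with the same material are contiguous.
        std::sort(vertices.begin(), vertices.end(),
                  [](const Vertex& a, const Vertex& b) {
                      return a.materialId < b.materialId;
                  });

        std::map<int, GLuint> vbos;          // material ID -> VBO handle
        std::size_t start = 0;
        while (start < vertices.size()) {
            std::size_t end = start;
            while (end < vertices.size() &&
                   vertices[end].materialId == vertices[start].materialId)
                ++end;

            GLuint vbo = 0;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER,
                         (end - start) * sizeof(Vertex),
                         &vertices[start], GL_STATIC_DRAW);
            vbos[vertices[start].materialId] = vbo;
            start = end;
        }
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        return vbos;
    }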
For animating pedestrians, we use skinning and instancing. The skinning technique provides improved rendering quality, but at a higher cost than simple tweening or hierarchical mesh animation. Thus, we use bone instancing in order to optimize skinning performance: bone instancing means that every pedestrian uses one of the pre-calculated bone matrix sets. For vegetation rendering, the complex geometrical appearance of every plant is rendered using texture-based billboards. In addition, three levels of detail for every tree species are pre-generated and all billboard textures are combined into compact texture atlases. View frustum culling is applied to roads, vehicles, trees, and pedestrians to reduce the rendering load.
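For the view frustum culling mentioned above, a common formulation is a sphere-versus-plane test: each object is approximated by a bounding sphere and rejected if it lies entirely behind any frustum plane. The sketch below shows that test; frustum-plane extraction from the camera matrices is omitted and the single test plane is illustrative.

    // Sketch: sphere-versus-frustum-plane test used for view frustum culling.
    #include <cstdio>

    struct Plane  { float nx, ny, nz, d; };         // normalized: dot(n, p) + d = signed distance
    struct Sphere { float cx, cy, cz, r; };

    bool insideFrustum(const Sphere& s, const Plane* planes, int planeCount)
    {
        for (int i = 0; i < planeCount; ++i) {
            float dist = planes[i].nx * s.cx + planes[i].ny * s.cy +
                         planes[i].nz * s.cz + planes[i].d;
            if (dist < -s.r)                        // completely behind this plane:
                return false;                       // cull the object
        }
        return true;                                // at least partially visible
    }

    int main()
    {
        Plane nearPlane{ 0.0f, 0.0f, -1.0f, -1.0f };    // a single test plane (illustrative)
        Sphere tree{ 0.0f, 0.0f, -50.0f, 4.0f };        // tree in front of the camera
        Sphere behind{ 0.0f, 0.0f, 20.0f, 4.0f };       // object behind the camera
        std::printf("tree visible:   %d\n", insideFrustum(tree,   &nearPlane, 1));
        std::printf("behind visible: %d\n", insideFrustum(behind, &nearPlane, 1));
        return 0;
    }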
User Interaction
User interaction for the system is divided into three subparts: mouse navigation, widget-based navigation, and multi-touch screen navigation. All navigation items are synchronized with each other and with internal virtual camera viewpoints. Furthermore, they are designed with novice users in mind. Three different camera modes allow users to fly over a city, examine the city from a map-style view, and visit the city from street level. Navigation controls mimic the behavior of common virtual globe viewers, with some additional improvements specific to the system.
The multi-touch navigation is developed and tested on two 42" PQ Labs Multi-Touch G3 screens, providing up to 32 simultaneous touch points. Functionality includes dragging, rotating, translating, panning, resetting, and selecting objects using gestures. These representative gestures are chosen based on the touch interfaces of common hand-held devices; however, events involving more than two fingers and large touch areas are also supported. Although the current gesture API is specific to our hardware, the underlying gesture-handling mechanism is easily extensible to support other hardware APIs and gestures in the future.
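The sketch below shows one possible shape for such an extensible gesture layer: hardware-specific touch frames are classified into abstract gestures, which are dispatched to navigation handlers. The gesture set, classification rules, and handler interface are illustrative and do not reflect the system's actual API.

    // Sketch: a hardware-independent gesture dispatch layer (illustrative design).
    #include <cstdio>
    #include <functional>
    #include <map>
    #include <vector>

    enum class Gesture { Drag, Pinch, MultiFingerPan };

    struct TouchPoint { float x, y; };

    // Classify a raw touch frame into an abstract gesture (greatly simplified).
    Gesture classify(const std::vector<TouchPoint>& pts)
    {
        if (pts.size() == 1) return Gesture::Drag;
        if (pts.size() == 2) return Gesture::Pinch;
        return Gesture::MultiFingerPan;             // three or more fingers
    }

    int main()
    {
        // Navigation handlers registered per gesture; adding a gesture only
        // requires a new entry here, independently of the touch hardware API.
        std::map<Gesture, std::function<void()>> handlers{
            { Gesture::Drag,           []{ std::puts("pan camera"); } },
            { Gesture::Pinch,          []{ std::puts("zoom camera"); } },
            { Gesture::MultiFingerPan, []{ std::puts("tilt / reset view"); } },
        };

        std::vector<TouchPoint> frame{ {0.1f, 0.2f}, {0.5f, 0.6f} };  // two fingers
        handlers[classify(frame)]();
        return 0;
    }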