Urban (Forward and Inverse) Procedural Modeling
Proceduralization for Editing 3D Architectural Models
Inverse procedural modeling discovers a procedural representation of an existing geometric model, and the discovered procedural model then supports synthesizing new, similar models. We introduce an automatic approach that generates a compact, efficient, and re-usable procedural representation of a polygonal 3D architectural model. This representation is then used for structure-aware editing and synthesis of new geometric models that resemble the original. Our framework captures the pattern hierarchy of the input model in a split-tree data representation. A context-free split grammar, supporting a hierarchical nesting of procedural rules, is extracted from the tree and forms the basis of our interactive procedural editing engine. We show the application of our approach to a variety of architectural structures obtained by procedurally editing web-sourced models. Grammar generation takes a few minutes even for the most complex input, and synthesis is fully interactive for buildings composed of up to 200k polygons.
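A rough, hypothetical sketch of the kind of representation involved: a facade is stored as a split tree, and repeated subtrees are collapsed into context-free split-grammar rules. The class names, rule encoding, and example facade below are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical, simplified split-tree / split-grammar sketch (not the paper's code).

@dataclass
class SplitNode:
    label: str                      # terminal name for leaves, e.g. "window"
    axis: Optional[str] = None      # "x" or "y" for interior split nodes
    children: List["SplitNode"] = field(default_factory=list)

def signature(node: SplitNode) -> str:
    """Structural signature used to detect repeated subtrees."""
    if not node.children:
        return node.label
    return f"{node.axis}({','.join(signature(c) for c in node.children)})"

def extract_rules(node: SplitNode, rules: dict) -> str:
    """Collapse identical subtrees into shared non-terminals (grammar rules)."""
    sig = signature(node)
    if sig not in rules:
        rhs = [extract_rules(c, rules) for c in node.children]
        rules[sig] = (f"N{len(rules)}", node.axis, rhs or [node.label])
    return rules[sig][0]

if __name__ == "__main__":
    window, wall = SplitNode("window"), SplitNode("wall")
    floor = SplitNode("floor", "x", [wall, window, wall, window, wall])
    facade = SplitNode("facade", "y", [floor, floor, floor])   # three identical floors
    rules = {}
    axiom = extract_rules(facade, rules)
    print("axiom:", axiom)
    for sig, (name, axis, rhs) in rules.items():
        print(name, "->", axis or "terminal", rhs)
```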
|
Inverse Procedural Modeling of 3D Models for Virtual Worlds
This course presents a collection of state-of-the-art approaches for modeling and editing of 3D models for virtual worlds, simulations, and entertainment, in addition to real-world applications. The first contribution of this course is a coherent review of inverse procedural modeling (IPM) (i.e., proceduralization of provided 3D content). We describe different formulations of the problem as well as solutions based on those formulations. We show that although the IPM framework seems under-constrained, the state-of-the-art solutions actually use simple analogies to convert the problem into a set of fundamental computer science problems, which are then solved by corresponding algorithms or optimizations. The second contribution includes a description and categorization of results and applications of the IPM frameworks. Moreover, a substantial part of the course is devoted to summarizing different domain IPM frameworks for practical content generation in modeling and animation.
|
Interactive Sketching of Urban Procedural Models
3D modeling remains a notoriously difficult task for novices despite significant research effort to provide intuitive and automated systems. We tackle this problem by combining the strengths of two popular domains: sketch-based modeling and procedural modeling. On the one hand, sketch-based modeling exploits our ability to draw but requires detailed, unambiguous drawings to achieve complex models. On the other hand, procedural modeling automates the creation of precise and detailed geometry but requires the tedious definition and parameterization of procedural models. Our system uses a collection of simple procedural grammars, called snippets, as building blocks to turn sketches into realistic 3D models. We use a machine learning approach to solve the inverse problem of finding the procedural model that best explains a user sketch. We use nonphotorealistic rendering to generate artificial data for training convolutional neural networks capable of quickly recognizing the procedural rule intended by a sketch and estimating its parameters. We integrate our algorithm in a coarse-to-fine urban modeling system that allows users to create rich buildings by successively sketching the building mass, roof, facades, windows, and ornaments. A user study shows that by using our approach non-expert users can generate complex buildings in just a few minutes.
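The sketch below illustrates the general shape of such a recognizer, assuming a PyTorch-style CNN with one head that classifies the intended snippet and one that regresses its parameters; the layer sizes, head dimensions, and dependency on torch are invented for illustration and are not the network described above.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a tiny CNN that maps a 1-channel sketch image to
# (a) a snippet-type class and (b) a vector of procedural parameters.
# Layer sizes and head dimensions are assumptions, not the paper's network.

class SketchToProcedural(nn.Module):
    def __init__(self, num_snippets: int = 10, num_params: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.classify = nn.Linear(32 * 4 * 4, num_snippets)   # which grammar snippet
        self.regress = nn.Linear(32 * 4 * 4, num_params)      # its continuous parameters

    def forward(self, sketch):
        h = self.features(sketch)
        return self.classify(h), self.regress(h)

if __name__ == "__main__":
    model = SketchToProcedural()
    fake_sketch = torch.rand(1, 1, 128, 128)   # stand-in for an NPR-rendered sketch
    logits, params = model(fake_sketch)
    print(logits.shape, params.shape)          # torch.Size([1, 10]) torch.Size([1, 4])
```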
|
Procedural Editing of 3D Building Point Clouds
Thanks to recent advances in computational photography and remote sensing, point clouds of buildings are becoming increasingly available, yet their processing poses various challenges. In our work, we tackle the problem of point cloud completion and editing, and we approach it via inverse procedural modeling. Contrary to previous work, our approach operates directly on the point cloud without an intermediate triangulation. Our approach consists of 1) semi-automatic segmentation of the input point cloud with segment comparison and template matching to detect repeating structures, 2) a consensus-based voting schema and a pattern extraction algorithm to discover completed terminal geometry and their patterns of usage, all encoded into a context-free grammar, and 3) an interactive editing tool where the user can create new point clouds using procedural copy-and-paste operations and smart resizing. We demonstrate our approach on the editing of building models with up to 1.8M points. In our implementation, pre-processing takes up to several minutes, and a single editing operation needs from one second to one minute depending on the model size and operation type.
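As a hedged illustration of the repetition-detection step, the toy code below groups point-cloud segments by a coarse occupancy descriptor and forms a "completed" exemplar by overlaying the members of each group; the descriptor, grid resolution, and threshold are arbitrary stand-ins, not the paper's voting scheme.

```python
import numpy as np

# Hedged sketch of one idea from the pipeline above: group similar point-cloud
# segments with a coarse descriptor and "complete" each group by overlaying its
# members in a shared local frame. Grid size and threshold are arbitrary choices.

def descriptor(points: np.ndarray, res: int = 8) -> np.ndarray:
    """Normalized occupancy grid of a segment (crude shape signature)."""
    p = (points - points.min(0)) / (np.ptp(points, axis=0) + 1e-9)
    idx = np.minimum((p * res).astype(int), res - 1)
    grid = np.zeros((res, res, res))
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid.ravel()

def group_segments(segments, tau=0.35):
    """Greedy grouping of segments whose descriptors agree."""
    groups = []
    for seg in segments:
        d = descriptor(seg)
        for g in groups:
            if np.mean(np.abs(d - g["desc"])) < tau:
                g["members"].append(seg)
                break
        else:
            groups.append({"desc": d, "members": [seg]})
    return groups

def complete_terminal(group):
    """Union of all members, each re-centered: a denser 'completed' exemplar."""
    return np.vstack([m - m.mean(0) for m in group["members"]])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    window = rng.random((200, 3)) * [1.0, 1.5, 0.1]            # a fake repeating part
    segments = [window + rng.normal(0, 0.01, window.shape) for _ in range(4)]
    groups = group_segments(segments)
    print(len(groups), "group(s);", complete_terminal(groups[0]).shape[0], "points in exemplar")
```
|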
Example-Driven Procedural Urban Roads
Synthesizing and exploring large-scale realistic urban road networks is beneficial to 3D content creation, traffic animation, and urban planning. In this paper, we present an interactive tool that allows untrained users to design roads with complex realistic details and styles. Roads are generated by growing a geometric graph. During a sketching phase, the user specifies the target area and the examples. During a growing phase, two types of growth are effectively applied to generate roads in the target area; example-based growth uses patches extracted from the source example to generate roads that preserve some interesting structures in the example road networks; procedural-based growth uses the statistical information of the source example while effectively adapting the roads to the underlying terrain and the already generated roads. User-specified warping, blending, and interpolation operations are used at will to produce new road network designs that are inspired by the examples. Finally, our method computes city blocks, individual parcels, and plausible building and tree geometries. We have used our approach to create road networks covering up to 200 km² and containing over 3,500 km of roads.
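A minimal, hypothetical sketch of statistics-driven growth on a geometric road graph follows; the example statistics, snapping rule, and termination tests are invented to show the control flow, not the paper's growth procedure.

```python
import math, random

# Minimal, illustrative sketch of statistics-driven road growth on a geometric
# graph (nodes + edges). The "example statistics" and snapping rule are made up
# to show the control flow, not the paper's algorithm.

def grow_roads(seed=(0.0, 0.0), steps=200, region=50.0, snap=1.5):
    example_stats = {"length_mu": 4.0, "length_sigma": 0.8, "branch_prob": 0.25}
    nodes = [seed]
    edges = []
    frontier = [(0, 0.0)]                       # (node index, heading in radians)
    random.seed(1)
    while frontier and steps > 0:
        steps -= 1
        i, heading = frontier.pop(0)
        length = random.gauss(example_stats["length_mu"], example_stats["length_sigma"])
        heading += random.gauss(0.0, 0.15)      # gentle curvature
        x = nodes[i][0] + length * math.cos(heading)
        y = nodes[i][1] + length * math.sin(heading)
        if abs(x) > region or abs(y) > region:  # stay inside the target area
            continue
        # snap to an existing intersection to close loops instead of duplicating nodes
        near = [j for j, (nx, ny) in enumerate(nodes) if (nx - x) ** 2 + (ny - y) ** 2 < snap ** 2]
        j = near[0] if near else len(nodes)
        if not near:
            nodes.append((x, y))
            frontier.append((j, heading))
            if random.random() < example_stats["branch_prob"]:
                frontier.append((j, heading + math.copysign(math.pi / 2, random.random() - 0.5)))
        edges.append((i, j))
    return nodes, edges

if __name__ == "__main__":
    nodes, edges = grow_roads()
    print(len(nodes), "intersections,", len(edges), "road segments")
```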
|
Proceduralization of Buildings at City Scale
We present a framework for the conversion of existing 3D unstructured urban models into a compact procedural representation that enables model synthesis, querying, and simplification of large urban areas. During the de-instancing phase, a dissimilarity-based clustering is performed to obtain a set of building components and component types. During the proceduralization phase, the components are arranged into a context-free grammar, which can be directly edited or interactively manipulated. We applied our approach to convert several large city models, with up to 19,000 building components spanning over 180 km², into procedural models of a few thousand terminals, non-terminals, and 50-100 rules.
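The toy code below stands in for the de-instancing step: each component is reduced to a small feature vector and greedily merged with the nearest existing component type within a threshold. The features and threshold are assumptions for illustration only, not the paper's dissimilarity measure.

```python
import numpy as np

# A hedged, minimal stand-in for "dissimilarity-based clustering" of building
# components: each component is reduced to a small feature vector and greedily
# assigned to the nearest existing type within a threshold.

def features(component_vertices: np.ndarray) -> np.ndarray:
    extents = component_vertices.max(0) - component_vertices.min(0)   # bbox size
    return np.concatenate([np.sort(extents), [len(component_vertices)]])

def deinstance(components, tau=2.0):
    types, assignment = [], []
    for verts in components:
        f = features(verts)
        d = [np.linalg.norm(f - t) for t in types]
        if d and min(d) < tau:
            assignment.append(int(np.argmin(d)))
        else:
            assignment.append(len(types))
            types.append(f)
    return types, assignment

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    tower = rng.random((40, 3)) * [10, 10, 50]
    house = rng.random((20, 3)) * [8, 12, 6]
    city = [tower + rng.normal(0, 0.1, tower.shape) for _ in range(5)] + \
           [house + rng.normal(0, 0.1, house.shape) for _ in range(7)]
    types, assignment = deinstance(city)
    print(len(city), "components ->", len(types), "component types:", assignment)
```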
|
Inverse Design of Urban Procedural Models
We propose a framework that enables adding intuitive high-level control to an existing urban procedural model. In particular, we provide a mechanism to interactively edit urban models, a task which is important to stakeholders in gaming, urban planning, mapping, and navigation services. Procedural modeling allows the quick creation of large, complex 3D models, but controlling the output is a well-known open problem. Thus, while forward procedural modeling has thrived, in this paper we add to the arsenal an inverse modeling tool. Users, unaware of the rules of the underlying urban procedural model, can instead specify arbitrary target indicators to control the modeling process. The system itself discovers how to alter the parameters of the urban procedural model so as to produce the desired 3D output. We label this process inverse design.
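A toy rendering of the inverse-design loop, assuming a made-up forward procedural model, two indicators, and a simple stochastic hill climb; none of these stand for the actual system.

```python
import random

# Toy illustration of "inverse design": search the parameters of a (fake)
# procedural city model so that indicators computed on its output match
# user-specified targets. The model, indicators, and optimizer are stand-ins.

def procedural_city(params):
    """Fake forward model: parameters -> indicators (built density, avg height)."""
    road_spacing, floors = params
    density = min(1.0, 40.0 / road_spacing) * 0.8
    avg_height = 3.0 * floors
    return {"density": density, "avg_height": avg_height}

def error(params, targets):
    ind = procedural_city(params)
    return sum((ind[k] - v) ** 2 for k, v in targets.items())

def inverse_design(targets, iters=2000):
    random.seed(0)
    best = (random.uniform(20, 200), random.uniform(1, 30))
    best_err = error(best, targets)
    for _ in range(iters):
        cand = (max(1.0, best[0] + random.gauss(0, 5)), max(1.0, best[1] + random.gauss(0, 1)))
        e = error(cand, targets)
        if e < best_err:
            best, best_err = cand, e
    return best, best_err

if __name__ == "__main__":
    targets = {"density": 0.6, "avg_height": 24.0}   # what the user asks for
    params, err = inverse_design(targets)
    print("road spacing %.1f m, %.1f floors, residual %.4f" % (params[0], params[1], err))
```
|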
Procedural Generation of Parcels in Urban Modeling
We present a method for interactive procedural generation of parcels within the urban modeling pipeline. Our approach partitions the interior of city blocks using user-specified subdivision attributes and style parameters. Moreover, our method is both robust and persistent in the sense of being able to map individual parcels from before an edit operation to after an edit operation; this enables transferring most, if not all, customizations despite small- to large-scale interactive editing operations. The guidelines guarantee that the resulting subdivisions are functionally and geometrically plausible for subsequent building modeling and construction. Our results include visual and statistical comparisons that demonstrate how the parcel configurations created by our method can closely resemble those found in real-world cities of a large variety of styles. By directly addressing the block subdivision problem, we intend to increase the editability and realism of the urban modeling pipeline and to establish a standard for parcel generation in future urban modeling methods.
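A simplified sketch of recursive block subdivision follows; real parcel generation operates on arbitrary block polygons with street frontage, whereas this illustration restricts blocks to axis-aligned rectangles and uses invented style parameters.

```python
import random

# Simplified illustration of recursive block subdivision into parcels. Blocks
# are plain axis-aligned rectangles (x, y, width, height) to keep the sketch
# dependency-free; target area and irregularity are arbitrary "style" stand-ins.

def subdivide(block, target_area=600.0, irregularity=0.2, rng=None):
    rng = rng or random.Random(7)
    x, y, w, h = block
    if w * h <= target_area:
        return [block]                       # small enough: this is a parcel
    t = 0.5 + rng.uniform(-irregularity, irregularity)   # off-center split
    if w >= h:                               # split perpendicular to the long side
        a, b = (x, y, w * t, h), (x + w * t, y, w * (1 - t), h)
    else:
        a, b = (x, y, w, h * t), (x, y + h * t, w, h * (1 - t))
    return subdivide(a, target_area, irregularity, rng) + \
           subdivide(b, target_area, irregularity, rng)

if __name__ == "__main__":
    city_block = (0.0, 0.0, 120.0, 80.0)     # a 120 m x 80 m block
    parcels = subdivide(city_block)
    print(len(parcels), "parcels, total area %.0f m^2" % sum(w * h for _, _, w, h in parcels))
```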
|
Building Reconstruction using Manhattan-World Grammar
We present a passive computer vision method that exploits existing mapping and navigation databases in order to automatically create 3D building models. Our method defines a grammar for representing changes in building geometry that approximately follow the Manhattan-world assumption, which states that there is a predominance of three mutually orthogonal directions in the scene. Using multiple calibrated aerial images, we extend previous Manhattan-world methods to robustly produce a single, coherent, complete geometric model of a building with partial textures. Our method uses an optimization to discover a 3D building geometry that produces the same set of façade orientation changes observed in the captured images. We have applied our method to several real-world buildings and have analyzed our approach using synthetic buildings.
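To illustrate the underlying idea, the sketch below encodes a rectilinear footprint as its sequence of 90-degree turns and scores a candidate against an observed turn sequence; the encoding and scoring are illustrative choices, not the paper's optimization.

```python
# Hedged sketch of the core idea: under the Manhattan-world assumption a
# rectilinear footprint can be summarized by its sequence of +/-90 degree turns,
# and candidate geometry can be scored by how well it reproduces the turn
# sequence observed in images. The encoding below is an illustrative choice.

def turn_sequence(footprint):
    """Footprint: list of (x, y) corners of an axis-aligned polygon -> 'L'/'R' turns."""
    turns = []
    n = len(footprint)
    for i in range(n):
        ax, ay = footprint[i]
        bx, by = footprint[(i + 1) % n]
        cx, cy = footprint[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        turns.append("L" if cross > 0 else "R")
    return "".join(turns)

def score(candidate, observed):
    """Fraction of positions where the candidate's turns match the observation."""
    m = min(len(candidate), len(observed))
    agree = sum(candidate[i] == observed[i] for i in range(m))
    return agree / max(len(candidate), len(observed))

if __name__ == "__main__":
    l_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]   # CCW L-shaped footprint
    observed = "LLRLLL"                                          # turns seen in the images
    print(turn_sequence(l_shape), score(turn_sequence(l_shape), observed))
```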
|
Inverse Procedural Modeling by Automatic Generation of L-systems
We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-system rules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols of an L-system alphabet. The terminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements (hierarchies) in the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.
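The toy code below isolates one ingredient of this pipeline: voting over pairwise translations between element instances and coding the dominant repetition as a parametric rule. Rotations, scales, branching, and hierarchies are omitted, and the rule syntax is an informal stand-in for an L-system rule.

```python
from collections import Counter

# Illustrative sketch: detect the dominant pairwise translation between
# instances of a terminal and code the sequence as a parametric rule.

def dominant_offset(positions, quantum=0.1):
    """Cluster pairwise translations (on a grid) and return the most frequent one."""
    votes = Counter()
    for i, (xi, yi) in enumerate(positions):
        for xj, yj in positions[i + 1:]:
            key = (round((xj - xi) / quantum), round((yj - yi) / quantum))
            votes[key] += 1
    (kx, ky), _ = votes.most_common(1)[0]
    return kx * quantum, ky * quantum

def code_sequence(positions, terminal="A"):
    dx, dy = dominant_offset(positions)
    n = len(positions)
    return {"axiom": f"R({n})",
            "rule": f"R(n) : n > 0 -> {terminal} T({dx:.2f},{dy:.2f}) R(n-1)"}

if __name__ == "__main__":
    windows = [(2.0 * k, 0.0) for k in range(6)]     # six equally spaced elements
    print(code_sequence(windows))
```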
|
Interactive Example‐Based Urban Layout Synthesis
We present an interactive system for synthesizing urban layouts by example. Our method simultaneously performs both a structure-based synthesis and an image-based synthesis to generate a complete urban layout with a plausible street network and with aerial-view imagery. Our approach uses the structure and image data of real-world urban areas and a synthesis algorithm to provide several high-level operations to easily and interactively generate complex layouts by example. The user can create new urban layouts by a sequence of operations such as join, expand, and blend without being concerned about low-level structural details. Further, the ability to blend example urban layout fragments provides a powerful way to generate new synthetic content. We demonstrate our system by creating urban layouts using example fragments from several real-world cities, each ranging from hundreds to thousands of city blocks and parcels.
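As a structural illustration of one such operation, the sketch below implements a toy "join": an example street fragment is translated onto an attachment node of the current layout and merged, with coincident intersections snapped together. The data layout and tolerance are assumptions.

```python
# Hedged structural sketch of a "join" operation for example-based layout
# synthesis. Nodes are 2D points, edges are index pairs; the snapping tolerance
# and dictionary layout are illustrative assumptions.

def join(layout, fragment, attach_at, frag_anchor, snap=0.25):
    nodes, edges = list(layout["nodes"]), list(layout["edges"])
    ax, ay = nodes[attach_at]
    fx, fy = fragment["nodes"][frag_anchor]
    dx, dy = ax - fx, ay - fy                          # translation aligning the anchors
    remap = {}
    for i, (x, y) in enumerate(fragment["nodes"]):
        x, y = x + dx, y + dy
        hit = next((j for j, (nx, ny) in enumerate(nodes)
                    if (nx - x) ** 2 + (ny - y) ** 2 < snap ** 2), None)
        if hit is None:
            hit = len(nodes)
            nodes.append((x, y))
        remap[i] = hit                                  # snap coincident intersections
    edges += [(remap[a], remap[b]) for a, b in fragment["edges"]]
    return {"nodes": nodes, "edges": edges}

if __name__ == "__main__":
    layout = {"nodes": [(0, 0), (1, 0)], "edges": [(0, 1)]}          # existing main street
    fragment = {"nodes": [(0, 0), (0, 1), (1, 1)], "edges": [(0, 1), (1, 2)]}  # example patch
    merged = join(layout, fragment, attach_at=1, frag_anchor=0)
    print(len(merged["nodes"]), "nodes,", len(merged["edges"]), "edges")
```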
|
Interactive Reconfiguration of Urban Layouts
The ability to create and edit a model of a large-scale city is necessary for a variety of applications. Although the layout of the urban space is captured as images, it consists of a complex collection of man-made structures arranged in parcels, city blocks, and neighborhoods. Editing the content as unstructured images yields undesirable results. However, most GIS maintain and provide digital records of metadata such as road networks, land use, parcel boundaries, building types, water/sewage pipes, and power lines that can be used as a starting point to infer and manipulate higher-level structure. We describe an editor for interactive reconfiguration of city layouts, which provides tools to expand, scale, replace, and move parcels and blocks, while efficiently exploiting their connectivity and zoning. Our results include applying the system to several cities with different urban layouts by sequentially applying transformations.
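A small illustration of a block-level edit in this spirit: scaling a block about its centroid rescales all of its parcels together, so shared parcel boundaries inside the block stay shared. The data model below (blocks owning lists of parcel polygons) is an assumption, not the editor's representation.

```python
# Hedged sketch of one block-level edit operation (uniform scaling of a block).

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def scale_block(block, factor):
    """Scale every parcel of a block about the block's overall centroid."""
    all_pts = [p for parcel in block["parcels"] for p in parcel]
    cx, cy = centroid(all_pts)
    scaled = [[(cx + factor * (x - cx), cy + factor * (y - cy)) for x, y in parcel]
              for parcel in block["parcels"]]
    return {"id": block["id"], "parcels": scaled}

if __name__ == "__main__":
    block = {"id": "B12",
             "parcels": [[(0, 0), (10, 0), (10, 20), (0, 20)],
                         [(10, 0), (20, 0), (20, 20), (10, 20)]]}
    bigger = scale_block(block, 1.25)
    print(bigger["parcels"][0][1], bigger["parcels"][1][0])   # shared corner stays shared
```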
|
Style Grammars for Interactive Visualization of Architecture
Interactive visualization of architecture provides a way to quickly visualize existing or novel buildings and structures. Such applications require both fast rendering and an effortless input regimen for creating and changing architecture using high-level editing operations that automatically fill in the necessary details. Procedural modeling and synthesis is a powerful paradigm that yields high data-amplification and can be coupled with fast rendering techniques to quickly generate plausible details of a scene without much or any user interaction. Previously, forward generating procedural methods have been proposed where a procedure is explicitly created to generate particular content. In this article, we present our work in inverse procedural modeling of buildings and describe how to use an extracted repertoire of building grammars to facilitate the visualization and quick modification of architectural structures and buildings. We demonstrate an interactive application where the user draws simple building blocks and using our system can automatically complete the building “in the style of” other buildings using view-dependent texture mapping or nonphotorealistic rendering techniques. Our system supports an arbitrary number of building grammars created from user subdivided building models and captured photographs. Using only edit, copy, and paste metaphors, entire building styles can be altered and transferred from one building to another in a few operations, enhancing the ability to modify an existing architectural structure or to visualize a novel building in the style of others.
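The toy below conveys the "in the style of" idea: a coarse user-drawn symbol is expanded by a per-style grammar, and swapping the grammar swaps the style. The styles, symbols, and rules are invented for illustration, not the extracted grammars described above.

```python
import random

# Illustrative toy: expand a coarse "mass" symbol with a style-specific grammar.

STYLES = {
    "brick_rowhouse": {
        "mass":   [["facade", "roof_flat"]],
        "facade": [["floor", "floor", "floor"]],
        "floor":  [["wall", "window", "wall", "window", "wall"]],
    },
    "glass_tower": {
        "mass":   [["facade", "roof_flat"]],
        "facade": [["curtain_floor"] * 12],
        "curtain_floor": [["glass_panel"] * 8],
    },
}

def complete(symbol, grammar, rng):
    """Recursively expand a symbol with a style grammar; unknown symbols are terminals."""
    if symbol not in grammar:
        return [symbol]
    production = rng.choice(grammar[symbol])
    return [t for s in production for t in complete(s, grammar, rng)]

if __name__ == "__main__":
    rng = random.Random(0)
    sketchy_mass = "mass"                       # what the user drew, coarsely
    for style, grammar in STYLES.items():
        terminals = complete(sketchy_mass, grammar, rng)
        print(style, "->", len(terminals), "terminals, e.g.", terminals[:4])
```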
|
Build-by-Number: Rearranging the Real World to Visualize Novel Architectural Spaces
We present Build-by-Number, a technique for quickly designing architectural structures that can be rendered photorealistically at interactive rates. We combine image-based capturing and rendering with procedural modeling techniques to allow the creation of novel structures in the style of real-world structures. Starting with a simple model recovered from a sparse image set, the model is divided into feature regions, such as doorways, windows, and brick. These feature regions essentially comprise a mapping from model space to image space, and can be recombined to texture a novel model. Procedural rules for the growth and reorganization of the model are automatically derived to allow for very fast editing and design. Further, the redundancies marked by the feature labeling can be used to perform automatic occlusion replacement and color equalization in the finished scene, which is rendered using view-dependent texture mapping on standard graphics hardware. Results using four captured scenes show that a great variety of novel structures can be created very quickly once a captured scene is available, and rendered with a degree of realism comparable to the original scene.
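A hedged sketch of the feature-region mapping: labels map to captured image patches, and a novel face is textured by sampling a patch with the matching label. The patch format, file names, and region table are hypothetical, invented for illustration.

```python
import random

# Toy feature-region table: feature label -> captured (image, pixel rectangle).
# File names and rectangles are placeholders, not captured data.

FEATURE_REGIONS = {
    "window": [("photo_02.jpg", (120, 340, 60, 90)), ("photo_05.jpg", (400, 310, 58, 92))],
    "door":   [("photo_01.jpg", (220, 500, 80, 160))],
    "brick":  [("photo_03.jpg", (0, 0, 256, 256))],
}

def texture_face(feature_label, rng):
    """Pick a captured patch of the requested feature to texture a novel face."""
    image, rect = rng.choice(FEATURE_REGIONS[feature_label])
    return {"source_image": image, "pixel_rect": rect}

if __name__ == "__main__":
    rng = random.Random(4)
    novel_facade = ["brick", "window", "window", "door", "brick"]   # faces of a new model
    for face, label in enumerate(novel_facade):
        print(face, label, texture_face(label, rng))
```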
|