Mesh optimisation #599
**Initial tests with MeshLab**

MeshLab is an application that many computer graphics researchers use to demonstrate new algorithms, by implementing them as plugins. It has a few simplification algorithms that can be run immediately, so a good first step is to put the problematic meshes through it.

**Procedure**

For each of the files above, examples of the nominated elements were found in Navisworks. These were isolated by hiding all other elements. They were moved to the origin (using the Units and Transform... dialog, or the Item Tools -> Move option, depending on how large the actual models were) to avoid vertex quantisation on export. Finally they were exported to FBX.

Two applicable algorithms in MeshLab are Clustering and Quadric Edge Collapse (QC). QC is the more advanced algorithm: at each step it looks for a viable pair of vertices and collapses them. Clustering collapses vertices by discretising them to a grid. Initial tests showed QC outperformed Clustering by a fair margin. This is expected, as Clustering relies heavily on meshes already being very well tessellated.

**Detailed response of QC**

For each element, the mesh was imported and a new layer was created for each operation (i.e. meshes were not iteratively subdivided).

**Piles**

Vertex Position Optimisation and Boundary Preservation were disabled. The control variable is the number of triangles. As can be seen, QC can reduce the mesh by a factor of 100x before gaps start to show.

**Wire Fence**

This case shows the difficulty of controlling the simplification, as the 10% reduction that was trivial for the Piles has a very salient impact on the fence.

**Sound Barrier**

The sound barrier already has a low triangle count, and is further complicated by the fact that each beam is modelled as a separate entity. There is very little scope for simplification here.

**Steel Framing**

An I-Beam from the steel frame model shows hardly any visible difference with significant reductions in face count.

**Pipe Insulation**

The pipe insulation from the IFC is well behaved during simplification, though the original mesh is relatively lightweight compared to some of the others.

**Remarks**

QC is prevalent partly because it is really not one algorithm, but the first use of iterative pair contraction, which underpins a number of implementations. (QM is also still one of the best metrics, see below.) Pair contraction combines two vertices into one, deleting the other, at each step. As the algorithm proceeds through multiple steps the mesh is refined until some simplification goal is met. Which pairs are chosen to be contracted, how the mesh error is measured (the contraction cost), and how the simplification goals are defined can all be modified. For example, works (not relevant to us) have modified the algorithm to account for surface attributes, changed the collapsed primitive, or even used view-dependent cost functions.

**Cost Function (Error Quadrics)**

The heuristic used to compute the error in the tests above (QM) is based on considering each triangle that meets a vertex to be a plane. The error is the sum of squared distances of that vertex to all the planes. To begin with, since all the planes intersect the vertex, the error is zero. As the vertex moves, the error increases.
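For reference, the standard quadric error metric (QM) from Garland & Heckbert can be written compactly as follows; this is the textbook formulation rather than anything specific to our code:

$$
\Delta(\mathbf{v}) \;=\; \sum_{\mathbf{p}\,\in\,\mathrm{planes}(\mathbf{v})} \big(\mathbf{p}^{\mathsf T}\mathbf{v}\big)^{2} \;=\; \mathbf{v}^{\mathsf T}\Big(\sum_{\mathbf{p}} \mathbf{p}\,\mathbf{p}^{\mathsf T}\Big)\mathbf{v} \;=\; \mathbf{v}^{\mathsf T}\,Q\,\mathbf{v}
$$

where $\mathbf{v} = (x, y, z, 1)^{\mathsf T}$ is the vertex in homogeneous coordinates and $\mathbf{p} = (a, b, c, d)^{\mathsf T}$ represents the plane $ax + by + cz + d = 0$ with $a^2 + b^2 + c^2 = 1$. When a pair is contracted, the two vertices' quadrics $Q$ are summed, and the contraction cost is the error of the new vertex position under the combined quadric.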
Relevant recent works have extended this, for example by adding a curvature measure to the cost function (1, 2). In a study from 2006, 12 different metrics were compared and QM was shown to still be the best option 10 years on (it had the lowest RMSE to start with, maintained as face count decreased, and one of the lowest Hausdorff distances, with the differences to lower ones being very slight).

**Next Steps**

While there may be some slight performance gains from applying tweaked metrics, far more important is defining the simplification goal. As iterative contraction methods have a working mesh at each stage, the goal is usually given as a particular level of detail (number of triangles). This is not straightforward for our models, as triangle budgets depend on many things, including the size of the federations that models may end up in, which are unknown at import time. As can be seen in the images above, in most cases significant numbers of triangles can be removed, but meshes suddenly decrease in quality once the total triangle count drops below a threshold (one that is unfortunately not obviously consistent between meshes). If we could find something to link that threshold to, it could allow us to control the algorithm based on triangle count. Another, possibly better, solution is to use the expected errors directly, or to find another quick (perhaps global) metric that could stop simplification when a mesh deviates too far from an ideal.
**Update from sprint:** We should measure the efficiency of a mesh as vertices per bounding-box volume. This will be selected by the user on upload, either via a Low/Medium/High selection or a direct value. This requires us to understand the size of the bounding box, so we need to extract the units of measure from the file.
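As a rough illustration of the metric described above, a minimal sketch of the calculation might look like this (the `Mesh` structure and function names are hypothetical, not bouncer's actual types):

```cpp
#include <array>
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Hypothetical mesh record: positions in the file's native units.
struct Mesh {
    std::vector<std::array<double, 3>> vertices;
};

// Vertices per cubic metre of axis-aligned bounding box.
// scaleToMetres is the factor extracted from the file's units
// (e.g. 0.001 for millimetres).
double detailMetric(const Mesh& mesh, double scaleToMetres)
{
    if (mesh.vertices.empty()) {
        return 0.0;
    }
    std::array<double, 3> lo, hi;
    lo.fill(std::numeric_limits<double>::max());
    hi.fill(std::numeric_limits<double>::lowest());
    for (const auto& v : mesh.vertices) {
        for (int i = 0; i < 3; ++i) {
            lo[i] = std::min(lo[i], v[i]);
            hi[i] = std::max(hi[i], v[i]);
        }
    }
    double volume = 1.0;
    for (int i = 0; i < 3; ++i) {
        volume *= (hi[i] - lo[i]) * scaleToMetres;
    }
    // Flat (2D) elements would give a zero volume; guard against dividing by it.
    const double minVolume = 1e-9;
    return static_cast<double>(mesh.vertices.size()) / std::max(volume, minVolume);
}
```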
Some notes...

**Units**

At present, the units extracted from the files will only be used to control the decimation, which only happens on import. For the units system (i.e. working out factors), Boost has a units library, though it is compile-time only. Therefore the units should be stored for now as a factor (e.g. for converting to meters). Currently there is no concept of a model in the database. In the future it could be valuable to store units, for when we work on coordinate systems, and possibly to support mixing meshes of different units. However, as the necessary functionality is still to be defined, the scale factor could go on

**Setting Units**

There are eight locations.
Of these, the top four methods actually involve creating meshes:

- IFC
- BIM
- Assimp
- ODA
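Since the plan above is simply to carry a scale-to-meters factor through the import, a minimal sketch of that idea could look like the following (the enum and function names are hypothetical, not the actual bouncer API; the real importers each report units in their own way):

```cpp
#include <stdexcept>

// Hypothetical set of source-file units.
enum class ModelUnits { Millimetres, Centimetres, Metres, Feet, Inches };

// Factor that converts a length in the model's units to meters.
double unitsToMetresFactor(ModelUnits units)
{
    switch (units) {
    case ModelUnits::Millimetres: return 0.001;
    case ModelUnits::Centimetres: return 0.01;
    case ModelUnits::Metres:      return 1.0;
    case ModelUnits::Feet:        return 0.3048;
    case ModelUnits::Inches:      return 0.0254;
    }
    throw std::invalid_argument("Unknown units");
}
```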
**Performing Mesh Decimation**

The mesh decimation should occur on the Default graph, because this will reduce the amount of geometry stored in the database. There is no mechanism (exposed to the user) to re-stash a model with updated parameters.

**The model importer abstraction**

The best place to perform the optimisation is in
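Whichever class ends up owning this step, the shape of the pass being described (run over the default graph's mesh nodes after import, before stashing) might look roughly as follows; all type and method names here are hypothetical placeholders rather than bouncer's actual interfaces:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical stand-ins for the scene graph types produced by the importers.
struct MeshNode {
    // vertex/face buffers would live here
};

struct Scene {
    std::vector<std::shared_ptr<MeshNode>> meshNodes;
};

// A pass that visits every mesh node of the default graph and decimates it
// in place, so the reduced geometry is what gets stored in the database.
class SimplificationOptimizer {
public:
    SimplificationOptimizer(double quality, std::size_t minVertices)
        : quality_(quality), minVertices_(minVertices) {}

    void apply(Scene& scene) const
    {
        for (auto& node : scene.meshNodes) {
            decimate(*node);
        }
    }

private:
    void decimate(MeshNode& node) const
    {
        // Delegate to the chosen library here, deriving a target vertex
        // count from quality_ and minVertices_.
        (void)node;
    }

    double quality_;
    std::size_t minVertices_;
};
```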
The quadric edge-collapse method introduced by Garland is still a very competitive method for simplification. It is possible to re-implement it from scratch, as all the algorithms/structures are well known. However, to save time, a number of libraries can be repurposed. Below is an overview of the libraries, in order of preference.

**VCGLib**

This is the library that underpins MeshLab, the tool in which the tests above were performed. It focuses on triangle meshes and implements a suite of mesh processing abilities, including quadric edge-collapse simplification.
**Polygon Mesh Processing Library (PMP)**

A modern library for processing surface meshes, including mesh representation, decimation and remeshing. PMP is created by former maintainers of OpenMesh (see below).
**CGAL**

The Computational Geometry Algorithms Library (CGAL) is a software project that provides access to efficient and reliable geometric algorithms. CGAL is split into packages, including the relevant Triangulated Surface Mesh Simplification package. The library is fairly large, with significant dependencies; however, the simplification package has the most functionality of any, including different metrics to try if we get into testing and find that we want to try different algorithms.
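For a sense of what using that package involves, here is a minimal sketch based on the Triangulated Surface Mesh Simplification documentation (treat the exact policy class names as assumptions, since they have been renamed across CGAL releases):

```cpp
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Surface_mesh_simplification/edge_collapse.h>
#include <CGAL/Surface_mesh_simplification/Policies/Edge_collapse/Count_ratio_stop_predicate.h>

using Kernel = CGAL::Simple_cartesian<double>;
using Mesh = CGAL::Surface_mesh<Kernel::Point_3>;
namespace SMS = CGAL::Surface_mesh_simplification;

// Collapse edges (the default policies use a Lindstrom-Turk style cost)
// until only the requested ratio of edges remains.
void simplifyToRatio(Mesh& mesh, double ratio)
{
    SMS::Count_ratio_stop_predicate<Mesh> stop(ratio);
    SMS::edge_collapse(mesh, stop);
}
```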
**Summary**

Overall, CGAL seems to be the 'best' library (meaning most features, complete documentation, stable code); however, the dependencies are non-trivial given that we would only use it for this one step. Further, if there are specific issues with the decimation, it is possible to implement fixes in the other libraries based on the literature. There is also nothing stopping us from integrating a simpler library, then moving to CGAL if we feel we need its features or decide that we want to do more processing in bouncer. VCGLib has more issues (on its GitHub page) than PMP, but it is also a much longer-lived project with more users to find problems. We have also tested real examples of meshes we wish to process in VCGLib (via MeshLab) and found it works with the correct settings.

**OpenMesh**

A library for representing mesh data in a form that makes manipulation straightforward. The documentation includes examples of mesh decimation, though even 7 years ago there were discussions about making this sample more useful. The documentation does seem quite sparse. (Though there is activity on the repo, its spiritual successor PMP is likely to be more tractable.) C++. 3-Clause BSD.

**Other Mentions**

- A C++ library for mesh processing, but focusing on mesh representation rather than algorithms (it doesn't have an edge-collapse module, for example).
- A C++ library for mesh processing, but in this case mesh processing means deformations under constraints, rather than simplification.
- A C++ library focusing on processing of abstract surface & volume elements.
**Detail Metric**

The first version of the parameterisation will be controlled by a user-provided metric, Detail, which is defined as the number of vertices per m^3. To find some potentially suitable guidelines for this value, we can compare the distribution of this metric for four problematic models with a representative model (rac_sample_house). The distributions generally have very long tails, including for the sample house. The axis lines show the standard deviations of each plot. Some notes on the individual files:
In all cases the outliers are 'real', in the sense that there are highly detailed elements - the high Detail metrics are not due to, for example, 2D items dividing by zero. Based on the above, we could consider looking at the range around 2000 vertices/m^3. As the decimation algorithm takes a target number of vertices, we can model the effect on model size of arbitrary Detail settings for each model above. The x-axis is the Detail, in N/m^3, and the y-axis is the ratio by which the model would be reduced. Axis lines indicate the 1x and 10x reduction levels. From this plot we can see that there is a well-defined knee for most models. We also see that a 10x model reduction is achieved for the responsive models at just under 1000. How well behaved the metric is as a control variable is mixed: on one hand the rac model is unresponsive, as indicated by its very flat plot, which is good because this model is representative of a 'good' one; on the other hand, the problematic P01 model is also unresponsive.
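The modelling above amounts to clamping each mesh's vertex count to what the Detail setting allows and comparing totals. A small sketch of that calculation (the `MeshStats` structure and names are illustrative, not actual bouncer code) could be:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative per-mesh summary: current vertex count and
// bounding-box volume in cubic metres.
struct MeshStats {
    std::size_t vertexCount;
    double bboxVolumeM3;
};

// Ratio by which a model would shrink if every mesh were decimated
// down to `detail` vertices per m^3 (never increasing a mesh's count).
double reductionRatio(const std::vector<MeshStats>& meshes, double detail)
{
    double current = 0.0;
    double target = 0.0;
    for (const auto& m : meshes) {
        const double cur = static_cast<double>(m.vertexCount);
        // A mesh is only reduced if its current count exceeds what the
        // Detail setting allows for its bounding-box volume.
        target += std::min(cur, detail * m.bboxVolumeM3);
        current += cur;
    }
    return target > 0.0 ? current / target : 1.0;
}
```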
**Review (in progress)**

The head of 599 has been updated with a new optimiser, based on PMP. This required a number of unexpected pre-processing stages to be added to the library, because the topology was not as connected as it could be, which caused suboptimal decimation performance. At this stage we can review the performance of the decimation with regard to a number of models. The head has two configuration parameters, Quality and Vertex. Vertex is required to prevent simplification of very simple models (imagine a single cube spread over 100 m^2). Below is an example of a test configuration. (The only way to specify simplification parameters is via the JSON config - there are no CLI arguments for them.)
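The example configuration itself is not reproduced in this thread; purely to illustrate the kind of block being discussed, a hypothetical JSON fragment (key names guessed from the Quality and Vertex parameters described below, not the actual bouncer schema) might look like:

```json
{
  "simplification": {
    "quality": 1000,
    "vertex": 100
  }
}
```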
For the following tests, Quality was set to 1000 and Vertex to 100. The builds were RelWithDebug with optimisations on, but the imports were run under the debugger. In addition to bouncer, the fork of PMP has been updated with a new sample to explore the behaviour of the pre-processed decimation, so that we can compare the theoretical quality of the PMP implementation with what we see in the viewer.

The current bouncer takes 532 ms to simplify the rac_basic_sample_house from 277011 to 196179 vertices (1.41x). The test settings (1000, 100) should ideally not affect the model, though in practice a couple of meshes are above this threshold. For the most part the model is unchanged, as expected, but in the fine-detail meshes there are a number of failures.

It takes 782772 ms (13 minutes) to simplify 5213850-ATK-01-XX-M3DM-C-300084 from 71693630 to 4603591 vertices (15.5x). (The typical reduction is from 37150 to 2173 vertices per pile.) In this model, the quality is as good as expected in the feasibility investigation. The downside is that it is actually possible to reduce further, to say 800-900 vertices per pile (another 2x reduction), but the simplistic control parameters do not allow basing this on the mesh.

It takes 266167 ms (~4.4 minutes) to reduce BIM_ProjectCoordination_MEP2 from 59606407 to 13873380 vertices (4.3x). For this model, the NWC (not the IFC) was used for the import. The typical mesh in this model is 100-1000 vertices and is reduced to 100 vertices. This model fails on at least one element with the exception "Exception SurfaceMesh::add_face: Complex edge. simplifying mesh 5755b597-b4a2-48ee-a06e-0b7d2a5fa725". This exception occurs while the PMP representation is being built (i.e. before the decimation). The complex edge message means there are three coincident edges. Of those meshes that are decimated, some parts of the model look clean, but the decimation fails saliently in others. This model is particularly challenging because individual elements are already quite small - it is the sheer number of them (~80,000) that creates the problem.

It takes 223450 ms (3.7 minutes) to reduce B028047-TTE-01-ZZ-CR-Z-9001_P01 from 107126609 to 67617345 vertices (1.5x). The mesh sizes are more broadly distributed, though there are still many small meshes. The quality is mixed, with some parts decimating well, but there are still areas of MEP models and highly detailed meshes, such as light fittings, with salient errors.

**Sample**

The merge sample in the pmp fork has been updated with the exact bouncer code used to perform the decimation. Additionally, the pmp fork has been adapted to allow compilation on gcc 9.8 (C++14), including export. This means the exact representation of the mesh from bouncer can be intercepted and exported for experimentation in the sample, and have the exact same algorithms run on it. This is how we can determine lower thresholds for the theoretical quality of individual elements.
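To make the "Complex edge" failure above concrete, here is a small sketch of how a triangle soup gets pushed into a `pmp::SurfaceMesh`. The guard around `add_face` is the kind of handling that would be needed; the helper function is ours rather than anything from the bouncer code (`pmp::SurfaceMesh`, `add_vertex` and `add_face` are real PMP calls, but check the exact exception type against the PMP version in use):

```cpp
#include <pmp/SurfaceMesh.h>

#include <array>
#include <cstddef>
#include <iostream>
#include <vector>

// Build a pmp::SurfaceMesh from indexed triangles. add_face throws when
// the face would create non-manifold topology (e.g. a "complex edge",
// i.e. more than two faces sharing one edge), which is the failure seen
// on some BIM_ProjectCoordination_MEP2 elements.
pmp::SurfaceMesh buildSurfaceMesh(
    const std::vector<std::array<float, 3>>& positions,
    const std::vector<std::array<unsigned int, 3>>& triangles)
{
    pmp::SurfaceMesh mesh;

    std::vector<pmp::Vertex> handles;
    handles.reserve(positions.size());
    for (const auto& p : positions)
        handles.push_back(mesh.add_vertex(pmp::Point(p[0], p[1], p[2])));

    std::size_t skipped = 0;
    for (const auto& t : triangles)
    {
        try
        {
            mesh.add_face({handles[t[0]], handles[t[1]], handles[t[2]]});
        }
        catch (const std::exception&)
        {
            // Non-manifold face; skipping it lets the rest of the mesh
            // be decimated, at the cost of a hole in the output.
            ++skipped;
        }
    }
    if (skipped > 0)
        std::cerr << "Skipped " << skipped << " non-manifold faces\n";

    return mesh;
}
```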
closing due to abandonment 😢
Description
We have seen a number of models from customers which have a high number of polygons, often on elements that do not warrant such a high number, e.g. pipe insulation or steel beams.
We will look to have an optional flag at upload which allows the bouncer to reduce the number of polygons on objects that do not warrant them, thus improving performance in the viewer.
Goals
Tasks
- `ImportFromFile`
- `default` scene with updated content but leaving everything else intact

PMP specifics
Related Resources
Models: