A question about how to get the iso value in V1 #303

RainSureZhao opened this issue Aug 24, 2024 · 5 comments

@RainSureZhao

In the V1 version of the code, the iso-value is computed by the Octree::GetIsoValue() function, but its internal implementation looks different from the original paper (2006). I don't quite understand the meaning of the value table in fData.setValueTables(fData.VALUE_FLAG,0), nor what the centerWeightContribution of each octree node (accumulated from the normal vectors) represents here. In the original paper (2006), every point of the input point cloud is evaluated against the octree to get a field value, and those values are then averaged. Why is that method not adopted here? Does it have any disadvantages?

    template<int Degree>
    Real Octree<Degree>::GetIsoValue(){
        const TreeOctNode* temp;
        Real isoValue, weightSum, w;
        Real myRadius;
        PointIndexValueFunction cf{};
        Point3D<Real> center;
        Real width;

        fData.setValueTables(fData.VALUE_FLAG,0);
        cf.valueTables = fData.valueTables;
        cf.res2 = fData.res2;
        myRadius = radius;
        isoValue = weightSum = 0;
        temp = tree.nextNode();
        while(temp){
            w = temp->nodeData.centerWeightContribution;
            if(w > EPSILON){
                cf.value = 0;
                VertexData::CenterIndex(temp,fData.depth,cf.index);
                temp->centerAndWidth(center,width);
                TreeOctNode::ProcessPointAdjacentNodes(center, &tree, myRadius, &cf);
                isoValue += cf.value * w;
                weightSum += w;
            }
            temp = tree.nextNode(temp);
        }
        return isoValue / weightSum;
    }
@RainSureZhao RainSureZhao changed the title A question about compute iso value in V1 A question about how to get the iso value in V1 Aug 24, 2024
@mkazhdan (Owner)

It's been a while since I've looked at the V1 version. But it could be that since we just want an averaged value of the function's values at the sample positions, the code approximates this by tracking the average of the sample positions within each cell and evaluating the average of the values at these averaged positions.

That would be faster, and would not require storing the point-set itself in memory.

@RainSureZhao (Author)

> It's been a while since I've looked at the V1 version. But it could be that since we just want an averaged value of the function's values at the sample positions, the code approximates this by tracking the average of the sample positions within each cell and evaluating the average of the values at these averaged positions.
>
> That would be faster, and would not require storing the point-set itself in memory.

Thank you for your detailed explanation! Your insights on the rationale behind the averaging method in the V1 version make a lot of sense, especially considering the balance between computational efficiency and memory usage. I appreciate you taking the time to clarify this, even though it's been a while since you last looked at the V1 code. Your response has helped me understand the approach better. Thanks again for your help!

@RainSureZhao (Author)

> It's been a while since I've looked at the V1 version. But it could be that since we just want an averaged value of the function's values at the sample positions, the code approximates this by tracking the average of the sample positions within each cell and evaluating the average of the values at these averaged positions.
>
> That would be faster, and would not require storing the point-set itself in memory.

Thank you for your previous explanation! I have a follow-up question regarding the setValueTables function in the V1 code. I'm having trouble understanding the calculation of x with the expression double(j) / (res2 - 1) and how it is used in the function to fill the value tables.

Could you please clarify the purpose of this expression and the reasoning behind using it in the context of the function? Specifically, I'm curious about what the division by (res2 - 1) represents and how it affects the results when populating the valueTables and dValueTables.

Thank you again for your help!

    template<int Degree, class Real>
    void FunctionData<Degree,Real>::setValueTables(const int& flags, const double& smooth){
        clearValueTables();
        if(flags & VALUE_FLAG){
            valueTables = std::make_shared<std::vector<double>>(res * res2, 0.0);
        }
        if(flags & D_VALUE_FLAG){
            dValueTables = std::make_shared<std::vector<double>>(res * res2, 0.0);
        }
        PPolynomial<Degree+1> function;
        PPolynomial<Degree>  dFunction;
        for(int i = 0; i < res; i ++){
            if(smooth > 0){
                function = baseFunctions[i].MovingAverage(smooth);
                dFunction = baseFunctions[i].derivative().MovingAverage(smooth);
            }
            else{
                function = baseFunctions[i];
                dFunction = baseFunctions[i].derivative();
            }
            for(int j = 0; j < res2; j ++){
                double x = double(j) / (res2 - 1);
                if(flags & VALUE_FLAG){
                    (*valueTables)[i * res2 + j] = function(x);
                }
                if(flags & D_VALUE_FLAG){
                    (*dValueTables)[i * res2 + j] = dFunction(x);
                }
            }
        }
    }

@mkazhdan (Owner)

Again, it's been a while since I've looked at this version, but it looks like it's evaluating the (1D) kernel at regular positions on the unit interval.

The point is that since the 3D kernel is the tensor product of 1D kernels, getting the values and gradients of the kernel at points in a regular 3D grid just requires knowing the values of the 1D kernel on a regular 1D grid. The tables store those (1D) values.

@RainSureZhao (Author)

> Again, it's been a while since I've looked at this version, but it looks like it's evaluating the (1D) kernel at regular positions on the unit interval.
>
> The point is that since the 3D kernel is the tensor product of 1D kernels, getting the values and gradients of the kernel at points in a regular 3D grid just requires knowing the values of the 1D kernel on a regular 1D grid. The tables store those (1D) values.

Thank you so much for your explanation!
