About the author: Jiang Xuewei (姜雪偉), technical partner at an IT company, senior IT lecturer, CSDN community expert, guest editor, best-selling author, and holder of a national invention patent. Published books include 《手把手教你架構(gòu)3D游戲引擎》 (Publishing House of Electronics Industry) and 《Unity3D實(shí)戰(zhàn)核心技術(shù)詳解》 (Publishing House of Electronics Industry).
CSDN course videos: http://edu.csdn.net/lecturer/144
Depth testing is used heavily in game engines. In Unity3D, for example, occlusion between UI elements can be controlled through their depth values, and in a 3D scene the front-to-back ordering of objects is likewise determined by their Z values. All of this relies on depth testing. Since Unity3D is cross-platform and renders with OpenGL on mobile devices, this series of articles covers the core techniques of OpenGL.
Depth testing needs somewhere to store its values; this storage area is called the depth buffer. Just as the color buffer stores the color of every fragment, the depth buffer stores a depth value per fragment, and it has the same width and height as the color buffer. The depth buffer is created automatically by the windowing system and stores its depth values as 16-, 24-, or 32-bit floats; on most systems the depth buffer has 24 bits of precision.
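If you want to know what your own context actually provides, the size of the default framebuffer's depth buffer can be queried. A minimal sketch, assuming an OpenGL 3.x context is already current (the helper name is made up for illustration):

#include <GL/glew.h>

// Returns how many bits the default framebuffer's depth buffer has (typically 24).
GLint GetDepthBufferBits()
{
    GLint depthBits = 0;
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH,
                                          GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &depthBits);
    return depthBits;
}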
To use depth testing you must first enable it. When depth testing is enabled, OpenGL tests a fragment's depth against the value stored in the depth buffer. If the test passes, the value in the depth buffer is replaced with the new depth value; if it fails, the fragment is discarded. Depth testing runs after the fragment shader and is performed in screen space. Screen-space coordinates correspond directly to the viewport set with OpenGL's glViewport function and are accessible in a GLSL fragment shader through the built-in gl_FragCoord variable. The x and y components of gl_FragCoord are the fragment's screen-space coordinates, with (0,0) at the bottom-left corner. gl_FragCoord also has a z component that holds the fragment's actual depth value; this z value is what gets compared against the contents of the depth buffer.
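As a small illustration of these screen-space coordinates (not part of the depth-testing demo itself; the 400-pixel threshold assumes a viewport roughly 800 pixels wide), a fragment shader can branch on gl_FragCoord.x:

#version 330 core
out vec4 color;

void main()
{
    // gl_FragCoord.xy are window coordinates with (0,0) at the bottom-left corner.
    if (gl_FragCoord.x < 400.0)
        color = vec4(1.0, 0.0, 0.0, 1.0);   // left half of the (assumed) 800-pixel-wide viewport: red
    else
        color = vec4(0.0, 1.0, 0.0, 1.0);   // right half: green
}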
Depth testing is disabled by default. To enable it, we turn on the GL_DEPTH_TEST capability:
glEnable(GL_DEPTH_TEST);

Once depth testing is enabled, OpenGL automatically stores a fragment's z value in the depth buffer whenever the fragment passes the depth test, and discards the fragment when it fails. With depth testing enabled you should also clear the depth buffer before every frame using the GL_DEPTH_BUFFER_BIT flag; otherwise the depth buffer keeps the depth values written during the previous frame:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

There are also cases where you want the depth test performed but do not want the depth buffer updated, effectively using a read-only depth buffer. Writes to the depth buffer are disabled by setting its depth mask to GL_FALSE:

glDepthMask(GL_FALSE);

The comparison used by the depth test can be changed by calling glDepthFunc, which accepts the following values:
GL_ALWAYS: the depth test always passes
GL_NEVER: the depth test never passes
GL_LESS: passes if the fragment's depth value is less than the stored depth value
GL_EQUAL: passes if the fragment's depth value is equal to the stored depth value
GL_LEQUAL: passes if the fragment's depth value is less than or equal to the stored depth value
GL_GREATER: passes if the fragment's depth value is greater than the stored depth value
GL_NOTEQUAL: passes if the fragment's depth value is not equal to the stored depth value
GL_GEQUAL: passes if the fragment's depth value is greater than or equal to the stored depth value

The default is GL_LESS, which discards every fragment whose depth value is greater than or equal to the value currently in the depth buffer.
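A minimal sketch of how these calls are typically combined (the two helper functions are hypothetical; the actual setup used by this article appears in the C++ code near the end):

#include <GL/glew.h>

// Call once after the OpenGL context has been created.
void SetupDepthTesting()
{
    glEnable(GL_DEPTH_TEST);   // depth testing is off by default
    glDepthFunc(GL_LESS);      // keep fragments that are closer than the stored depth (the default)
}

// Call at the start of every frame, before drawing.
void BeginFrame()
{
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    // Clearing only the color buffer would leave last frame's depth values in place,
    // so the depth buffer is cleared as well.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}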
Next, let's look at how depth values are computed, since the depth buffer works by comparing them. The depth values stored in the depth buffer lie between 0.0 and 1.0 and are compared against the z values of everything in the scene as seen from the viewer. Those view-space z values can be anywhere between the near plane and the far plane of the projection frustum, so we need some way of mapping them into the [0,1] range. The following (linear) equation transforms a z value into a depth value between 0.0 and 1.0:
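F_depth = (z - near) / (far - near)        (equation 1)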
Here near and far are the near and far values we supplied when setting up the projection matrix that defines the visible frustum (see my earlier blog posts for details on that). The equation takes a z value inside the frustum and transforms it into the [0,1] range, and a plot of z against the resulting depth value is simply a straight line between the near and far plane.
In practice, however, a linear depth buffer like this is almost never used. Because of the properties of a correct projection, the depth equation is non-linear and proportional to 1/z. Being proportional to 1/z means that, for example, z values between 1.0 and 2.0 map to depth values between 1.0 and 0.5, which gives us very high precision when z is small. The equation looks like this:
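F_depth = (1/z - 1/near) / (1/far - 1/near)        (equation 2)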
An important thing to remember is that the values in the depth buffer are not linear in screen space (they are linear in view space, before the projection matrix is applied). A value of 0.5 in the depth buffer does not mean the object sits halfway into the frustum; the vertex's z value is actually quite close to the near plane! A plot of z values against the resulting depth-buffer values makes this non-linear relationship clear.
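For example, with near = 0.1 and far = 100.0 (the planes used by the shaders later in this article), a depth-buffer value of 0.5 corresponds, via equation 2, to a view-space z of roughly 0.2, which is only about 0.1% of the way from the near plane to the far plane.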
As you can see, a nearby object with a small z value therefore gets very high depth precision. The equation that transforms the z value (as seen from the viewer) is embedded in the projection matrix, so by the time we have transformed vertex coordinates from view space to clip space and on to screen space, the non-linear equation has already been applied. For more on the projection matrix, readers can refer to my earlier blog posts.
Screen-space depth values really are non-linear, and readers can verify this themselves; the idea is as follows. Because screen-space depth has very high precision for small z values and very low precision for large ones, a fragment's depth value rises quickly with distance, so almost every vertex ends up with a depth value close to 1.0. If you render the depth value as a color and carefully move toward an object, you will eventually see the colors getting darker and darker, meaning their z values are getting smaller; this clearly shows the non-linear nature of the depth value. Nearby objects affect the depth value far more than distant ones: moving just a few inches is enough to take a dark color all the way back to bright.
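A minimal fragment shader for this experiment simply writes the raw, non-linearized depth value as a grayscale color (the same idea as the linearizing shader shown later, just without the conversion):

#version 330 core
out vec4 color;

void main()
{
    // gl_FragCoord.z is the fragment's (non-linear) depth value in the [0,1] range;
    // rendered as a color, almost everything appears nearly white.
    color = vec4(vec3(gl_FragCoord.z), 1.0);
}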
We can, however, transform the depth values back to linear. To achieve this we need to apply the inverse of the projection transformation to the depth value by itself. That means we first have to re-map the depth value from the [0,1] range back to normalized device coordinates in the [-1,1] range (clip space). Then we want to invert the non-linear equation (equation 2), as is done in the projection matrix, and apply that inverted equation to the resulting depth value. The result is then a linear depth value.
First we transform the depth value to an NDC depth value:
float z = depth * 2.0 - 1.0;

Then we take the resulting z value and apply the inverse transformation to retrieve the linear depth value:

float linearDepth = (2.0 * near) / (far + near - z * (far - near));

Note that this is not the exact inverse of equation 2. That equation is derived from the projection matrix, which again uses equation 2 to turn view-space z into a non-linear depth value, and it also works with depth values in the [0,1] range rather than z values in [near, far]. It is also not quite the formula as derived from the projection matrix: it is the result after additionally dividing by far. Linear depth values range all the way up to far, which does not work well as a color value between 0.0 and 1.0; dividing by far maps the depth value into the [0,1] range, which is better suited for visualization.
這個(gè)能夠?qū)⑵聊豢臻g的非線性深度值轉(zhuǎn)變?yōu)榫€性深度值的完整的片段著色器如下所示:
#version 330 core
out vec4 color;

float LinearizeDepth(float depth)
{
    float near = 0.1;
    float far = 100.0;
    float z = depth * 2.0 - 1.0; // Back to NDC
    return (2.0 * near) / (far + near - z * (far - near));
}

void main()
{
    float depth = LinearizeDepth(gl_FragCoord.z);
    color = vec4(vec3(depth), 1.0f);
}

Next is a problem you will run into frequently in practice: depth fighting (z-fighting). When two surfaces are placed so close to each other front-to-back that they overlap in the depth buffer, the renderer cannot tell which one is in front and the surfaces appear to flicker through each other. Engines usually deal with it in a few ways:

1. The first and most important trick is to never place objects so close together that some of their triangles overlap.
2. Set the near plane as far from the viewer as possible. Depth precision is concentrated close to the near plane, so pushing it out leaves more precision for the rest of the frustum.
3. Give up some performance in exchange for higher depth precision. Most depth buffers are 24-bit, but today's graphics cards also support 32-bit depth values, which raises the precision of the depth buffer considerably. So at the cost of some performance you get a more precise depth test and less depth fighting.
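As a minimal sketch of points 2 and 3, assuming a GLFW-created window (GLFW is not shown elsewhere in this article, so treat the hint call as illustrative) and reusing the camera, screenWidth and screenHeight names from the C++ code below:

// Ask for a 32-bit depth buffer instead of the usual 24 bits;
// this hint must be set before the window is created.
glfwWindowHint(GLFW_DEPTH_BITS, 32);

// Push the near plane out from 0.1 to 1.0; precision is concentrated near the near plane,
// so a larger near value leaves more precision for the rest of the scene.
glm::mat4 projection = glm::perspective(camera.Zoom,
                                        (float)screenWidth / (float)screenHeight,
                                        1.0f, 100.0f);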
Finally, here is the shader code that implements the depth-testing demo.
First, the fragment shader:
#version 330 core
out vec4 color;

float near = 1.0;
float far = 100.0;

float LinearizeDepth(float depth)
{
    float z = depth * 2.0 - 1.0; // Back to NDC
    return (2.0 * near * far) / (far + near - z * (far - near));
}

void main()
{
    float depth = LinearizeDepth(gl_FragCoord.z) / far; // divide by far to get depth in range [0,1] for visualization purposes.
    color = vec4(vec3(depth), 1.0f);
}

Next, the vertex shader:

#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoords;

out vec2 TexCoords;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(position, 1.0f);
    TexCoords = texCoords;
}

Below is the C++ code that drives these shaders; only the core calls are shown here:

// Set the viewport size
glViewport(0, 0, screenWidth, screenHeight);

// Enable depth testing
glEnable(GL_DEPTH_TEST);
// glDepthFunc(GL_ALWAYS); // Set to always pass the depth test (same effect as glDisable(GL_DEPTH_TEST))

// Compile the shader scripts
Shader shader("depth_testing.vs", "depth_testing.frag");

Then come the per-frame calls that clear the buffers and upload values to the shader:

// Clear the buffers
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Draw the objects and upload the matrices
shader.Use();
glm::mat4 model;
glm::mat4 view = camera.GetViewMatrix();
glm::mat4 projection = glm::perspective(camera.Zoom, (float)screenWidth / (float)screenHeight, 0.1f, 100.0f);
glUniformMatrix4fv(glGetUniformLocation(shader.Program, "view"), 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(glGetUniformLocation(shader.Program, "projection"), 1, GL_FALSE, glm::value_ptr(projection));

The Shader class used above is as follows:

#ifndef SHADER_H
#define SHADER_H

#include <GL/glew.h>
#include <string>
#include <fstream>
#include <sstream>
#include <iostream>

class Shader
{
public:
    GLuint Program;
    // Constructor generates the shader on the fly
    Shader(const GLchar* vertexPath, const GLchar* fragmentPath, const GLchar* geometryPath = nullptr)
    {
        // 1. Retrieve the vertex/fragment source code from filePath
        std::string vertexCode;
        std::string fragmentCode;
        std::string geometryCode;
        std::ifstream vShaderFile;
        std::ifstream fShaderFile;
        std::ifstream gShaderFile;
        // ensures ifstream objects can throw exceptions:
        vShaderFile.exceptions(std::ifstream::failbit | std::ifstream::badbit);
        fShaderFile.exceptions(std::ifstream::failbit | std::ifstream::badbit);
        gShaderFile.exceptions(std::ifstream::failbit | std::ifstream::badbit);
        try
        {
            // Open files
            vShaderFile.open(vertexPath);
            fShaderFile.open(fragmentPath);
            std::stringstream vShaderStream, fShaderStream;
            // Read file's buffer contents into streams
            vShaderStream << vShaderFile.rdbuf();
            fShaderStream << fShaderFile.rdbuf();
            // close file handlers
            vShaderFile.close();
            fShaderFile.close();
            // Convert stream into string
            vertexCode = vShaderStream.str();
            fragmentCode = fShaderStream.str();
            // If geometry shader path is present, also load a geometry shader
            if (geometryPath != nullptr)
            {
                gShaderFile.open(geometryPath);
                std::stringstream gShaderStream;
                gShaderStream << gShaderFile.rdbuf();
                gShaderFile.close();
                geometryCode = gShaderStream.str();
            }
        }
        catch (std::ifstream::failure e)
        {
            std::cout << "ERROR::SHADER::FILE_NOT_SUCCESSFULLY_READ" << std::endl;
        }
        const GLchar* vShaderCode = vertexCode.c_str();
        const GLchar* fShaderCode = fragmentCode.c_str();
        // 2. Compile shaders
        GLuint vertex, fragment;
        GLint success;
        GLchar infoLog[512];
        // Vertex Shader
        vertex = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertex, 1, &vShaderCode, NULL);
        glCompileShader(vertex);
        checkCompileErrors(vertex, "VERTEX");
        // Fragment Shader
        fragment = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragment, 1, &fShaderCode, NULL);
        glCompileShader(fragment);
        checkCompileErrors(fragment, "FRAGMENT");
        // If geometry shader is given, compile geometry shader
        GLuint geometry;
        if (geometryPath != nullptr)
        {
            const GLchar* gShaderCode = geometryCode.c_str();
            geometry = glCreateShader(GL_GEOMETRY_SHADER);
            glShaderSource(geometry, 1, &gShaderCode, NULL);
            glCompileShader(geometry);
            checkCompileErrors(geometry, "GEOMETRY");
        }
        // Shader Program
        this->Program = glCreateProgram();
        glAttachShader(this->Program, vertex);
        glAttachShader(this->Program, fragment);
        if (geometryPath != nullptr)
            glAttachShader(this->Program, geometry);
        glLinkProgram(this->Program);
        checkCompileErrors(this->Program, "PROGRAM");
        // Delete the shaders as they're linked into our program now and no longer necessary
        glDeleteShader(vertex);
        glDeleteShader(fragment);
        if (geometryPath != nullptr)
            glDeleteShader(geometry);
    }
    // Uses the current shader
    void Use()
    {
        glUseProgram(this->Program);
    }

private:
    void checkCompileErrors(GLuint shader, std::string type)
    {
        GLint success;
        GLchar infoLog[1024];
        if (type != "PROGRAM")
        {
            glGetShaderiv(shader, GL_COMPILE_STATUS, &success);
            if (!success)
            {
                glGetShaderInfoLog(shader, 1024, NULL, infoLog);
                std::cout << "| ERROR::::SHADER-COMPILATION-ERROR of type: " << type << "|\n"
                          << infoLog << "\n| -- --------------------------------------------------- -- |" << std::endl;
            }
        }
        else
        {
            glGetProgramiv(shader, GL_LINK_STATUS, &success);
            if (!success)
            {
                glGetProgramInfoLog(shader, 1024, NULL, infoLog);
                std::cout << "| ERROR::::PROGRAM-LINKING-ERROR of type: " << type << "|\n"
                          << infoLog << "\n| -- --------------------------------------------------- -- |" << std::endl;
            }
        }
    }
};
#endif

The demo also needs a camera; the complete camera class is shown below:

#pragma once
// Std. Includes
#include <vector>
// GL Includes
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Defines several possible options for camera movement. Used as abstraction to stay away from window-system specific input methods
enum Camera_Movement {
    FORWARD,
    BACKWARD,
    LEFT,
    RIGHT
};

// Default camera values
const GLfloat YAW = -90.0f;
const GLfloat PITCH = 0.0f;
const GLfloat SPEED = 3.0f;
const GLfloat SENSITIVITY = 0.25f;
const GLfloat ZOOM = 45.0f;

// An abstract camera class that processes input and calculates the corresponding Euler Angles, Vectors and Matrices for use in OpenGL
class Camera
{
public:
    // Camera Attributes
    glm::vec3 Position;
    glm::vec3 Front;
    glm::vec3 Up;
    glm::vec3 Right;
    glm::vec3 WorldUp;
    // Euler Angles
    GLfloat Yaw;
    GLfloat Pitch;
    // Camera options
    GLfloat MovementSpeed;
    GLfloat MouseSensitivity;
    GLfloat Zoom;

    // Constructor with vectors
    Camera(glm::vec3 position = glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f), GLfloat yaw = YAW, GLfloat pitch = PITCH)
        : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVITY), Zoom(ZOOM)
    {
        this->Position = position;
        this->WorldUp = up;
        this->Yaw = yaw;
        this->Pitch = pitch;
        this->updateCameraVectors();
    }
    // Constructor with scalar values
    Camera(GLfloat posX, GLfloat posY, GLfloat posZ, GLfloat upX, GLfloat upY, GLfloat upZ, GLfloat yaw, GLfloat pitch)
        : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVITY), Zoom(ZOOM)
    {
        this->Position = glm::vec3(posX, posY, posZ);
        this->WorldUp = glm::vec3(upX, upY, upZ);
        this->Yaw = yaw;
        this->Pitch = pitch;
        this->updateCameraVectors();
    }

    // Returns the view matrix calculated using Euler Angles and the LookAt Matrix
    glm::mat4 GetViewMatrix()
    {
        return glm::lookAt(this->Position, this->Position + this->Front, this->Up);
    }

    // Processes input received from any keyboard-like input system. Accepts input parameter in the form of camera defined ENUM (to abstract it from windowing systems)
    void ProcessKeyboard(Camera_Movement direction, GLfloat deltaTime)
    {
        GLfloat velocity = this->MovementSpeed * deltaTime;
        if (direction == FORWARD)
            this->Position += this->Front * velocity;
        if (direction == BACKWARD)
            this->Position -= this->Front * velocity;
        if (direction == LEFT)
            this->Position -= this->Right * velocity;
        if (direction == RIGHT)
            this->Position += this->Right * velocity;
    }

    // Processes input received from a mouse input system. Expects the offset value in both the x and y direction.
    void ProcessMouseMovement(GLfloat xoffset, GLfloat yoffset, GLboolean constrainPitch = true)
    {
        xoffset *= this->MouseSensitivity;
        yoffset *= this->MouseSensitivity;

        this->Yaw += xoffset;
        this->Pitch += yoffset;

        // Make sure that when pitch is out of bounds, screen doesn't get flipped
        if (constrainPitch)
        {
            if (this->Pitch > 89.0f)
                this->Pitch = 89.0f;
            if (this->Pitch < -89.0f)
                this->Pitch = -89.0f;
        }

        // Update Front, Right and Up Vectors using the updated Euler angles
        this->updateCameraVectors();
    }

    // Processes input received from a mouse scroll-wheel event. Only requires input on the vertical wheel-axis
    void ProcessMouseScroll(GLfloat yoffset)
    {
        if (this->Zoom >= 1.0f && this->Zoom <= 45.0f)
            this->Zoom -= yoffset;
        if (this->Zoom <= 1.0f)
            this->Zoom = 1.0f;
        if (this->Zoom >= 45.0f)
            this->Zoom = 45.0f;
    }

private:
    // Calculates the front vector from the Camera's (updated) Euler Angles
    void updateCameraVectors()
    {
        // Calculate the new Front vector
        glm::vec3 front;
        front.x = cos(glm::radians(this->Yaw)) * cos(glm::radians(this->Pitch));
        front.y = sin(glm::radians(this->Pitch));
        front.z = sin(glm::radians(this->Yaw)) * cos(glm::radians(this->Pitch));
        this->Front = glm::normalize(front);
        // Also re-calculate the Right and Up vector.
        // Normalize the vectors, because their length gets closer to 0 the more you look up or down, which results in slower movement.
        this->Right = glm::normalize(glm::cross(this->Front, this->WorldUp));
        this->Up = glm::normalize(glm::cross(this->Right, this->Front));
    }
};

That concludes this introduction to depth testing, offered here for reference.