
Nabla API in a nutshell: displaying an image with a texture


WARNING: due to some engine changes there are bugs here - to be fixed

General overview

Prologue

Switching to a new engine requires you to learn about its operation and usage. Because of that we decided to provide a simple tutorial that gives a general overview: it doesn't get into specific hardware details or low-level API implementation, but it should help you understand what working with the engine looks like, how to use it to get anything on the screen, and what you should study about the API to get started.

Nabla example

Initialization and basic stuff

Let's get started. We will begin by including the main Nabla header and some other useful headers.

#define _NBL_STATIC_LIB_
#include <iostream>
#include <cstdio>
#include <nabla.h>

#include "../common/QToQuitEventReceiver.h"
#include "nbl/asset/CGeometryCreator.h"

We will be using a simple quit event receiver attached to the event handler, so the application can be closed by pressing the "Q" key. We will also use the default geometry creator, which gives access to basic geometry creation and returns half-filled pipeline parameters. We also declare that we will be using the essential namespaces in which the engine's functionality lives.

using namespace nbl;
using namespace asset;
using namespace video;
using namespace core;

First of all you have to create a logical device - NablaDevice. A device provides many useful high-level objects; thanks to them you can, for instance, manipulate a scene, manage your assets, handle the file system or log useful messages. To create it you have to provide an SNablaCreationParameters object holding specific initialization information such as the driver being used, the window size, and the stencil and depth buffer settings.

nbl::SNablaCreationParameters params;
params.Bits = 24;
params.ZBufferBits = 24; 
params.DriverType = video::EDT_OPENGL;
params.WindowSize = dimension2d<uint32_t>(1280, 720);
params.Fullscreen = false;
params.Vsync = true;
params.Doublebuffer = true;
params.Stencilbuffer = false;
auto device = createDeviceEx(params);

Once the device is created, you can fetch a driver - IVideoDriver - that will be used for subsequent operations such as beginning a scene, attaching a frame buffer or creating GPU objects from CPU assets. We will also fetch the scene manager and asset manager for later use and set up the quit event receiver.

auto driver = device->getVideoDriver();
auto sceneManager = device->getSceneManager();
auto assetManager = device->getAssetManager();

QToQuitEventReceiver receiver;
device->setEventReceiver(&receiver);
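
For context, a rough sketch of what such a "press Q to quit" receiver may look like is shown below. The real implementation ships with the examples in ../common/QToQuitEventReceiver.h, so treat this as an illustration only - do not paste it alongside the include above.

class QToQuitEventReceiver : public nbl::IEventReceiver
{
public:
	// returning false lets other receivers still process the event
	bool OnEvent(const nbl::SEvent& event) override
	{
		if (event.EventType == nbl::EET_KEY_INPUT_EVENT && !event.KeyInput.PressedDown)
			if (event.KeyInput.Key == nbl::KEY_KEY_Q)
				running = false;
		return false;
	}

	bool keepOpen() const { return running; }

private:
	bool running = true;
};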

Before we move on to texture creation, we will handle camera creation and its properties and grab the geometry creator. The geometry creator will be used to create the mesh buffer we are going to render - a rectangle covered with colors thanks to the texture we will load soon. You'd better set the camera options as I did, otherwise you might see nothing because of the rectangle's orientation in space. Note that the camera is handled by the scene manager - this simply makes life easier.

scene::ICameraSceneNode* camera = sceneManager->addCameraSceneNodeFPS(0, 100.0f, 0.001f);

camera->setPosition(core::vector3df(-5, 0, 0));
camera->setTarget(core::vector3df(0, 0, 0));
camera->setNearValue(0.01f);
camera->setFarValue(1000.0f);

sceneManager->setActiveCamera(camera);

auto geometryCreator = device->getAssetManager()->getGeometryCreator();
auto rectangleGeometry = geometryCreator->createRectangleMesh(nbl::core::vector2df_SIMD(1.5, 3));

One more thing: we will quickly define a simple metadata class that will be described later. Don't worry about it for now, just copy and paste it somewhere at global scope.

class ExampleMetadataPipeline final : public IPipelineMetadata
{
public:
	ExampleMetadataPipeline(core::smart_refctd_dynamic_array<ShaderInputSemantic>&& _inputs)
		: m_shaderInputs(std::move(_inputs)) {}

	core::SRange<const ShaderInputSemantic> getCommonRequiredInputs() const override { return { m_shaderInputs->begin(), m_shaderInputs->end() }; }

	_NBL_STATIC_INLINE_CONSTEXPR const char* fakeLoaderName = "EXAMPLE";
	const char* getLoaderName() const override { return fakeLoaderName; }

private:
	core::smart_refctd_dynamic_array<ShaderInputSemantic> m_shaderInputs;
};

Texture - image data

Our goal is simple - display an object on screen covered with a texture. To do that, we have to put some effort into creating a few objects and performing some setup.

As a simple overview, we need:

  • a gpu image view
  • a gpu pipeline
  • one gpu descriptor set for the UBO holding the basic view parameters
  • one gpu descriptor set for the image view storing our image with its texel data

Creating the above obligates you to create some other objects along the way, because, for example, to create a gpu image view we have to handle its cpu version first, and to handle that we need to start with a cpu image. So you need to provide some intermediate objects to finally obtain the useful ones listed above, but all of it is covered below.

Before we begin with the stuff associated with pipelines and descriptor sets, we will handle the image asset that will hold our data and the relevant image parameters. We will deal with it by using one of the predefined loaders capable of loading common file formats. Loaders reduce the loading of various data types to just a few function calls - they return a ready-to-use cpu asset. In our case we will use the PNG loader, which returns an ICPUImage. An image alone isn't enough for rendering - we also need to provide its view, but first things first.

asset::IAssetLoader::SAssetLoadParams loadingParams;
auto images_bundle = assetManager->getAsset("../../media/color_space_test/R8G8B8A8_1.png", loadingParams);
assert(!images_bundle.isEmpty());
auto image = images_bundle.getContents().first[0];
auto image_raw = static_cast<asset::ICPUImage*>(image.get());

Remember that loaders return a bundle. This is desirable when your asset loader handles a file from which many assets of the same type can be obtained. Because we are pretty sure that in our case only one asset will be returned, we fetch it by specifying index 0 in the contents call. Note that we additionally use .first in that call - that's because the contents hold the beginning of the asset range as the first member and the end of the range as the second member.
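
If a file ever yields more than one asset, you could walk the whole range instead of indexing into it. A minimal sketch, assuming only the begin/end pair described above:

auto contents = images_bundle.getContents();
for (auto it = contents.first; it != contents.second; ++it)
{
	// each element is a smart_refctd_ptr to an asset of the requested type
	auto candidate = *it;
	// inspect or downcast the candidate here, e.g. with smart_refctd_ptr_static_cast
}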

Once the image asset is ready, we can create its view - an ICPUImageView - and from it the IGPUImageView that is our first step to success. To create an image view, we need to fill in its creation parameters struct ICPUImageView::SCreationParams, holding information about the subresource range, the type of the view and the image itself.

ICPUImageView::SCreationParams viewParams;
viewParams.flags = static_cast<ICPUImageView::E_CREATE_FLAGS>(0u);
viewParams.image = core::smart_refctd_ptr_static_cast<asset::ICPUImage>(image);
viewParams.format = asset::EF_R8G8B8A8_SRGB;
viewParams.viewType = IImageView<ICPUImage>::ET_2D;
viewParams.subresourceRange.baseArrayLayer = 0u;
viewParams.subresourceRange.layerCount = 1u;
viewParams.subresourceRange.baseMipLevel = 0u;
viewParams.subresourceRange.levelCount = 1u;

auto imageView = ICPUImageView::create(std::move(viewParams));
auto gpuImageView = driver->getGPUObjectsFromAssets(&imageView.get(), &imageView.get()+1u)->front();

Note we're using smart_refctd_ptr_static_cast - that's because the returned image is a smart_refctd_ptr of the interface type IAsset, while viewParams.image needs a more specialized type. We also specify basic parameters for the subresource range, but since the loaded image isn't complicated - it has no extra layers or mipmaps and is just a 2D image - we can simply leave them as above.
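
Conceptually, smart_refctd_ptr_static_cast plays the same role as std::static_pointer_cast does for shared_ptr. A small illustration using the image we just loaded:

core::smart_refctd_ptr<asset::IAsset> asInterface = image;                        // what the bundle hands out
auto asImage = core::smart_refctd_ptr_static_cast<asset::ICPUImage>(asInterface); // downcast to the concrete asset type
// asImage.get() points at the same object as image_raw above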

Shaders, pipeline and its layout, descriptor sets and their layouts

Once the IGPUImageView is created, we have to provide a pipeline that will be used by our driver. We also need to create descriptor sets to be fully prepared for rendering.

We will create most of those assets on our own, but take into account that some assets can be fetched from predefined ones which are provided, cached and ready to use via the asset manager's findAssets function. First we will focus on shaders - here we will actually fetch the default ones, to show how the caching system is used.

To use the cache, you have to specify some inputs. The most important one is the key used to look up the assets. You also have to specify their type and amount. We need a vertex and a fragment shader to get anything onto the screen. Fortunately these are cached by default and we can fetch them as a bundle. Note that they have been written for basic purposes and won't fit every use case.

constexpr std::string_view cacheKey = "nbl/builtin/materials/lambertian/singletexture/specializedshader";
const IAsset::E_TYPE types[] { IAsset::E_TYPE::ET_SPECIALIZED_SHADER, IAsset::E_TYPE::ET_SPECIALIZED_SHADER, static_cast<IAsset::E_TYPE>(0u) };

auto vertexShader = core::smart_refctd_ptr<ICPUSpecializedShader>();
auto fragmentShader = core::smart_refctd_ptr<ICPUSpecializedShader>();

auto bundle = assetManager->findAssets(cacheKey.data(), types);

core::smart_refctd_ptr<ICPUSpecializedShader> refCountedBundle[] =
{
    core::smart_refctd_ptr_static_cast<ICPUSpecializedShader>(bundle->begin()->getContents().first[0]),
    core::smart_refctd_ptr_static_cast<ICPUSpecializedShader>((bundle->begin() + 1)->getContents().first[0])
};

for (auto& shader : refCountedBundle)
{
    if (shader->getStage() == ISpecializedShader::ESS_VERTEX)
        vertexShader = std::move(shader);
    else if (shader->getStage() == ISpecializedShader::ESS_FRAGMENT)
        fragmentShader = std::move(shader);
}

Nabla uses virtual caches, so you should start using them as well whenever possible. We have specified a virtual caching key that is assigned to the assets we are trying to fetch. Because the result is a bundle, we have to distinguish the appropriate shaders and move them into the correct objects - vertexShader and fragmentShader. To help the asset manager, you have to tell it how many assets you expect - that's why the array of asset types is terminated with a 0 cast to IAsset::E_TYPE; the terminator lets the asset manager know how many assets you will be needing, so remember to terminate your type array.

Now we are ready to tackle the hard part. We will write a lambda that takes the value returned by the geometry creator (containing the half-filled pipeline parameters) as an argument and returns the objects we will be using every rendered frame. Let's call it createAndGetUsefullData. One more important thing: we will also define some variables holding the descriptor bindings and a variable holding the UBO size. It should look like the following.

size_t ds0SamplerBinding = 0, ds1UboBinding = 0, neededDS1UBOsz = 0;
auto createAndGetUsefullData = [&](asset::IGeometryCreator::return_type& geometryObject)
{
    // all the code will be here
};

It's time to create the ICPUDescriptorSetLayout::SBinding objects needed for descriptor set layout creation. We will need two descriptor sets, and therefore two descriptor set layouts.

asset::ICPUDescriptorSetLayout::SBinding binding0;
binding0.binding = ds0SamplerBinding;
binding0.type = EDT_COMBINED_IMAGE_SAMPLER;
binding0.count = 1u;
binding0.stageFlags = static_cast<asset::ICPUSpecializedShader::E_SHADER_STAGE>(asset::ICPUSpecializedShader::ESS_FRAGMENT);
binding0.samplers = nullptr;	

asset::ICPUDescriptorSetLayout::SBinding binding1;
binding1.count = 1u;
binding1.binding = ds1UboBinding;
binding1.stageFlags = static_cast<asset::ICPUSpecializedShader::E_SHADER_STAGE>(asset::ICPUSpecializedShader::ESS_VERTEX | asset::ICPUSpecializedShader::ESS_FRAGMENT);
binding1.type = asset::EDT_UNIFORM_BUFFER;

In the SBinding for descriptor set number 0 we specify that we will be using one combined image sampler attached to binding index 0 in the fragment shader. We will also be using a UBO for the basic view parameters - a struct in the vertex shader holding some useful matrices such as the MVP and model-view matrices. Having done that, we can create the descriptor set layouts which will be used for pipeline layout creation. Notice a rule you may have already seen - once you've got an object's layout, you can create the object itself. So let's do it.

auto ds0Layout = core::make_smart_refctd_ptr<asset::ICPUDescriptorSetLayout>(&binding0, &binding0 + 1);
auto ds1Layout = core::make_smart_refctd_ptr<asset::ICPUDescriptorSetLayout>(&binding1, &binding1 + 1);
auto pipelineLayout = core::make_smart_refctd_ptr<asset::ICPUPipelineLayout>(nullptr, nullptr, std::move(ds0Layout), std::move(ds1Layout), nullptr, nullptr);

auto rawds0 = pipelineLayout->getDescriptorSetLayout(0u);
auto rawds1 = pipelineLayout->getDescriptorSetLayout(1u);

Take a look at how many nullptrs have been put into that constructor call. We have skipped the first two parameters - push constant ranges - since we won't be using them. We have also specified only two of the four available descriptor set layouts - that's enough for our purposes.
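
If we did want push constants, the first two parameters could be filled with a begin/end pair of ranges instead. Here is a minimal sketch, assuming an asset::SPushConstantRange struct with stageFlags/offset/size members (mirroring its Vulkan equivalent) - it would replace the pipeline layout creation call above, reusing the same ds0Layout and ds1Layout:

// hypothetical alternative: one push constant range visible to the vertex shader
asset::SPushConstantRange pcRange;
pcRange.stageFlags = asset::ICPUSpecializedShader::ESS_VERTEX;
pcRange.offset = 0u;
pcRange.size = sizeof(core::matrix4SIMD); // e.g. room for a single 4x4 matrix
auto pipelineLayoutWithPC = core::make_smart_refctd_ptr<asset::ICPUPipelineLayout>(&pcRange, &pcRange + 1, std::move(ds0Layout), std::move(ds1Layout), nullptr, nullptr);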

As mentioned in the layout "rule" above, having the pipeline layout created, we can create a pipeline holding many useful graphics parameters such as shaders, vertex input parameters, blending parameters, rasterization parameters, primitive assembly parameters and more. Because descriptor set 1 exists to serve the uniform buffer object, we first need to provide some metadata describing it.

constexpr size_t DS1_METADATA_ENTRY_CNT = 3ull;
core::smart_refctd_dynamic_array<IPipelineMetadata::ShaderInputSemantic> shaderInputsMetadata = core::make_refctd_dynamic_array<decltype(shaderInputsMetadata)>(DS1_METADATA_ENTRY_CNT);
{
	ICPUDescriptorSetLayout* ds1layout = pipelineLayout->getDescriptorSetLayout(1u);

	constexpr IPipelineMetadata::E_COMMON_SHADER_INPUT types[DS1_METADATA_ENTRY_CNT]{ IPipelineMetadata::ECSI_WORLD_VIEW_PROJ, IPipelineMetadata::ECSI_WORLD_VIEW, IPipelineMetadata::ECSI_WORLD_VIEW_INVERSE_TRANSPOSE };
	constexpr uint32_t sizes[DS1_METADATA_ENTRY_CNT]{ sizeof(SBasicViewParameters::MVP), sizeof(SBasicViewParameters::MV), sizeof(SBasicViewParameters::NormalMat) };
	constexpr uint32_t relOffsets[DS1_METADATA_ENTRY_CNT]{ offsetof(SBasicViewParameters,MVP), offsetof(SBasicViewParameters,MV), offsetof(SBasicViewParameters,NormalMat) };

	for (uint32_t i = 0u; i < DS1_METADATA_ENTRY_CNT; ++i)
	{
		auto& semantic = (shaderInputsMetadata->end() - i - 1u)[0];
		semantic.type = types[i];
		semantic.descriptorSection.type = IPipelineMetadata::ShaderInput::ET_UNIFORM_BUFFER;
		semantic.descriptorSection.uniformBufferObject.binding = ds1layout->getBindings().begin()[0].binding;
		semantic.descriptorSection.uniformBufferObject.set = 1u;
		semantic.descriptorSection.uniformBufferObject.relByteoffset = relOffsets[i];
		semantic.descriptorSection.uniformBufferObject.bytesize = sizes[i];
		semantic.descriptorSection.shaderAccessFlags = ICPUSpecializedShader::ESS_VERTEX;

		neededDS1UBOsz += sizes[i];
	}
}

You don't have to worry if you don't know what this does, but consider it an important part of describing the shader inputs. Okay, just joking - let's dig into it.

Because we know the exact layout of the structure placed in the vertex shader, we can define an identical one in C++ - and that's exactly what is done here. Furthermore, because their memory layouts match, we can use the predefined C++ struct for basic view parameters (SBasicViewParameters) to compute the relative offsets and sizes. We also have to specify which binding it uses, what kind of descriptor is attached - a UBO, hence IPipelineMetadata::ShaderInput::ET_UNIFORM_BUFFER - and where the UBO is used, in our case the vertex shader. Having iterated through all the parameters we have accumulated the total size of the UBO - now we will use it to finally create the gpu UBO on the driver side.

auto gpuubo = driver->createDeviceLocalGPUBufferOnDedMem(neededDS1UBOsz);
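
For orientation, here is a rough sketch of what the SBasicViewParameters struct used above is assumed to contain (only the members referenced by the sizeof/offsetof calls are shown; check the engine headers for the exact definition and padding):

// assumed C++ mirror of the vertex shader's basic view parameters UBO
struct SBasicViewParameters
{
	float MVP[4*4];       // model-view-projection matrix
	float MV[3*4];        // model-view matrix
	float NormalMat[3*3]; // normal matrix (related to the inverse-transpose of MV)
};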

Now we can move to pipeline creation!

asset::SBlendParams blendParams;
asset::SRasterizationParams rasterParams;
rasterParams.faceCullingMode = asset::EFCM_NONE;

auto pipeline = core::make_smart_refctd_ptr<ICPURenderpassIndependentPipeline>(std::move(pipelineLayout), nullptr, nullptr, geometryObject.inputParams, blendParams, geometryObject.assemblyParams, rasterParams);
pipeline->setShaderAtIndex(ICPURenderpassIndependentPipeline::ESSI_VERTEX_SHADER_IX, vertexShader.get());
pipeline->setShaderAtIndex(ICPURenderpassIndependentPipeline::ESSI_FRAGMENT_SHADER_IX, fragmentShader.get());

A pipeline requires some inputs and the geometry creator returns some of them, but because we don't have them all, we provide the missing ones and set the face culling mode to none to avoid disappearing fragments in certain conditions. The two nullptr parameters are the beginning and end of a range of shaders - we pass an empty range here and set the shaders afterwards with setShaderAtIndex. Okay, we are almost done, but before we create a gpu mesh buffer and gpu descriptor sets, we will use the fake ExampleMetadataPipeline class to attach the basic view parameters input properties (shaderInputsMetadata) to the pipeline as metadata. This is the class we defined at global scope, and we will use it to show how loaders deal with extra data and how that data is consumed while rendering.

assetManager->setAssetMetadata(pipeline.get(), core::make_smart_refctd_ptr<ExampleMetadataPipeline>(std::move(shaderInputsMetadata)));
auto metadata = pipeline->getMetadata();

All the relevant work we have done with SBindings and descriptor set layouts was in order to create the gpu descriptor sets, which we will do right now.

auto gpuDescriptorSet0 = driver->createGPUDescriptorSet(std::move(driver->getGPUObjectsFromAssets(&rawds0, &rawds0 + 1)->front()));
{
	video::IGPUDescriptorSet::SWriteDescriptorSet write;
	write.dstSet = gpuDescriptorSet0.get();
	write.binding = ds0SamplerBinding;
	write.count = 1u;
	write.arrayElement = 0u;
	write.descriptorType = asset::EDT_COMBINED_IMAGE_SAMPLER;
	IGPUDescriptorSet::SDescriptorInfo info;
	{
		info.desc = std::move(gpuImageView);
		ISampler::SParams samplerParams = { ISampler::ETC_CLAMP_TO_EDGE, ISampler::ETC_CLAMP_TO_EDGE, ISampler::ETC_CLAMP_TO_EDGE, ISampler::ETBC_FLOAT_OPAQUE_BLACK, ISampler::ETF_LINEAR, ISampler::ETF_LINEAR, ISampler::ESMM_LINEAR, 0u, false, ECO_ALWAYS };
		info.image = { driver->createGPUSampler(samplerParams), EIL_SHADER_READ_ONLY_OPTIMAL };
	}
	write.info = &info;
	driver->updateDescriptorSets(1u, &write, 0u, nullptr);
}

auto gpuDescriptorSet1 = driver->createGPUDescriptorSet(std::move(driver->getGPUObjectsFromAssets(&rawds1, &rawds1 + 1)->front()));
{
	video::IGPUDescriptorSet::SWriteDescriptorSet write;
	write.dstSet = gpuDescriptorSet1.get();
	write.binding = ds1UboBinding;
	write.count = 1u;
	write.arrayElement = 0u;
	write.descriptorType = asset::EDT_UNIFORM_BUFFER;
	video::IGPUDescriptorSet::SDescriptorInfo info;
	{
		info.desc = gpuubo;
		info.buffer.offset = 0ull;
		info.buffer.size = neededDS1UBOsz;
	}
	write.info = &info;
	driver->updateDescriptorSets(1u, &write, 0u, nullptr);
}

We specify the write parameters used by the driver to update each descriptor set, similar to what we did with the descriptor set layouts. Note that we are finally passing gpu objects (the image view and the UBO) as the descriptors in info.desc.

Well, the pipeline is ready, but it's only the cpu version and we can't render with that. Let's create a gpu one!

auto gpuPipeline = driver->getGPUObjectsFromAssets(&pipeline.get(), &pipeline.get() + 1)->front();

We are now ready to create the gpu mesh buffer using the parameters delivered by the geometry creator object.

constexpr auto MAX_ATTR_BUF_BINDING_COUNT = video::IGPUMeshBuffer::MAX_ATTR_BUF_BINDING_COUNT;
constexpr auto MAX_DATA_BUFFERS = MAX_ATTR_BUF_BINDING_COUNT + 1;
core::vector<asset::ICPUBuffer*> cpubuffers;
cpubuffers.reserve(MAX_DATA_BUFFERS);
for (auto i = 0; i < MAX_ATTR_BUF_BINDING_COUNT; i++)
{
	auto buf = geometryObject.bindings[i].buffer.get();
	if (buf)
		cpubuffers.push_back(buf);
}
auto cpuindexbuffer = geometryObject.indexBuffer.buffer.get();
if (cpuindexbuffer)
	cpubuffers.push_back(cpuindexbuffer);

auto gpubuffers = driver->getGPUObjectsFromAssets(cpubuffers.data(), cpubuffers.data() + cpubuffers.size());

asset::SBufferBinding<video::IGPUBuffer> bindings[MAX_DATA_BUFFERS];
for (auto i = 0, j = 0; i < MAX_ATTR_BUF_BINDING_COUNT; i++)
{
	if (!geometryObject.bindings[i].buffer)
		continue;
	auto buffPair = gpubuffers->operator[](j++);
	bindings[i].offset = buffPair->getOffset();
	bindings[i].buffer = core::smart_refctd_ptr<video::IGPUBuffer>(buffPair->getBuffer());
}
if (cpuindexbuffer)
{
	auto buffPair = gpubuffers->back();
	bindings[MAX_ATTR_BUF_BINDING_COUNT].offset = buffPair->getOffset();
	bindings[MAX_ATTR_BUF_BINDING_COUNT].buffer = core::smart_refctd_ptr<video::IGPUBuffer>(buffPair->getBuffer());
}

auto mb = core::make_smart_refctd_ptr<video::IGPUMeshBuffer>(core::smart_refctd_ptr(gpuPipeline), nullptr, bindings, std::move(bindings[MAX_ATTR_BUF_BINDING_COUNT]));
{
	mb->setIndexType(geometryObject.indexType);
	mb->setIndexCount(geometryObject.indexCount);
	mb->setBoundingBox(geometryObject.bbox);
}

The code above takes the vertex and index buffers and the other parameters fetched from the geometry creator object, converts them to gpu buffers and plugs them into the appropriate slots of the new gpu mesh buffer, also setting up its basic properties (index type, index count, bounding box). Finally we return all the data required in the next section.

return std::make_tuple(mb, gpuPipeline, gpuubo, metadata, gpuDescriptorSet0, gpuDescriptorSet1);

That is enough to start rendering! So let's move to the best part of the tutorial - rendering our rectangle!

Rendering process

Let's start with fetching our important objects and creating an array of descriptor sets. An array is required because binding descriptor sets always assumes the user may pass several sets at once.

auto gpuRectangle = createAndGetUsefullData(rectangleGeometry);
auto gpuMeshBuffer = std::get<0>(gpuRectangle);
auto gpuPipeline = std::get<1>(gpuRectangle);
auto gpuubo = std::get<2>(gpuRectangle);
auto metadata = std::get<3>(gpuRectangle);
auto gpuDescriptorSet0 = std::get<4>(gpuRectangle);
auto gpuDescriptorSet1 = std::get<5>(gpuRectangle);

IGPUDescriptorSet* gpuDescriptorSets[] = { gpuDescriptorSet0.get(), gpuDescriptorSet1.get() };

Great, now only the hot loop is left.

while (device->run() && receiver.keepOpen())
{
	driver->beginScene(true, true, video::SColor(255, 255, 255, 255));

	camera->OnAnimate(std::chrono::duration_cast<std::chrono::milliseconds>(device->getTimer()->getTime()).count());
	camera->render();

	const auto viewProjection = camera->getConcatenatedMatrix();
	core::matrix3x4SIMD modelMatrix;
	modelMatrix.setRotation(nbl::core::quaternion(0, 1, 0));

	core::matrix4SIMD mvp = core::concatenateBFollowedByA(viewProjection, modelMatrix);

	core::vector<uint8_t> uboData(gpuubo->getSize());
	auto pipelineMetadata = static_cast<const asset::IPipelineMetadata*>(metadata);

	for (const auto& shdrIn : pipelineMetadata->getCommonRequiredInputs())
	{
		if (shdrIn.descriptorSection.type == asset::IPipelineMetadata::ShaderInput::ET_UNIFORM_BUFFER && shdrIn.descriptorSection.uniformBufferObject.set == 1u && shdrIn.descriptorSection.uniformBufferObject.binding == ds1UboBinding)
		{
			switch (shdrIn.type)
			{
				case asset::IPipelineMetadata::ECSI_WORLD_VIEW_PROJ:
				{
					memcpy(uboData.data() + shdrIn.descriptorSection.uniformBufferObject.relByteoffset, mvp.pointer(), shdrIn.descriptorSection.uniformBufferObject.bytesize);
				}
				break;
				case asset::IPipelineMetadata::ECSI_WORLD_VIEW:
				{
					core::matrix3x4SIMD MV = camera->getViewMatrix();
					memcpy(uboData.data() + shdrIn.descriptorSection.uniformBufferObject.relByteoffset, MV.pointer(), shdrIn.descriptorSection.uniformBufferObject.bytesize);
				}
				break;
				case asset::IPipelineMetadata::ECSI_WORLD_VIEW_INVERSE_TRANSPOSE:
				{
					core::matrix3x4SIMD MV = camera->getViewMatrix();
					memcpy(uboData.data() + shdrIn.descriptorSection.uniformBufferObject.relByteoffset, MV.pointer(), shdrIn.descriptorSection.uniformBufferObject.bytesize);
				}
				break;
			}
		}
	}

	driver->updateBufferRangeViaStagingBuffer(gpuubo.get(), 0ull, gpuubo->getSize(), uboData.data());

	driver->bindGraphicsPipeline(gpuPipeline.get());
	driver->bindDescriptorSets(video::EPBP_GRAPHICS, gpuPipeline->getLayout(), 0u, 2u, gpuDescriptorSets, nullptr);

	driver->drawMeshBuffer(gpuMeshBuffer.get());
	
	driver->endScene();
}

That's it, now you are ready to launch the example! Take a look at a few things. We fetch the view-projection matrix and combine it with a model matrix so the rectangle is oriented to be better visible in the scene. You should also note how updating the UBO looks. We create raw uboData where the whole struct is "kept" and write into specific memory locations determined by the relative offsets we know from the shader inputs - thanks to the metadata. Of course this indirection isn't required and in such a simple case it doesn't buy us much, but we wanted to show you how it works. Once the raw ubo data is filled, the UBO gets updated via the staging buffer. The last calls bind the created gpu pipeline and gpu descriptor sets, and finally draw the mesh buffer. Our rectangle appears on screen!

More examples

You will find the example code described above here:
https://github.com/Devsh-Graphics-Programming/Nabla/blob/master/examples_tests/05.IrrlichtBaWTutorialExample/main.cpp
I also recommend building and exploring our other examples, where you can see more features. You will find them here:
https://github.com/Devsh-Graphics-Programming/Nabla/tree/master/examples_tests