Everything posted by HappyPolygon
-
You could move the start point of the simulation to a negative time... or convert current state to object and just disable and hide the original.
-
I hope this is a temporary state where they are trying to cover their COVID-19 financial losses, from back when their updates were light on good new features. And at some point (probably on the next major release, when they stop supporting compatibility with older versions) they will standardize and include all the extra capsules in the main application. On the other hand, there was a recent long list of open job positions; I guess this also has an effect on cost...
-
I made a suggestion about that about a year ago. It seems that the Shader Field only uses the Plane projection method, and there's definitely something wrong with how it tries to UV map a texture even on a flat surface.

I made a test scene using a different approach. I deformed a sphere using the Displacer deformer, then deleted all 0-level polygons using a Bool, and put that under the Volume Builder. clouds.c4d

The downside is that the Bool becomes very heavy to compute as the resolution of the objects increases to match the texture. So once you are happy with the Bool, it's best to convert it to a mesh and then toss that into the Volume Builder.

Just so you can make a comparison render experiment, you can also try the old method of using just an alpha texture: set the Bool method to "A without B" (so it won't create a base for the clouds), turn on the Displacement channel of the material, lower its strength to a small value like 1 or 2 cm, and add some sub-polygon displacement. (My setup is not RS, but you can convert it later if you see any better results.) Texture map from http://www.shadedrelief.com/natural3/pages/clouds.html
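The "delete all 0-level polygons" step is really just a threshold on how far the texture displaced each vertex. Here is a rough numpy sketch of that idea (the texture values, radius and strength are made-up illustration numbers, not taken from the actual scene file):

```python
import numpy as np

# Toy stand-in for a grayscale cloud texture (0 = no cloud).
# Values are made up for illustration.
cloud_map = np.array([[0.0, 0.2],
                      [0.7, 0.0]])

base_radius = 100.0   # sphere radius in scene units
strength = 5.0        # Displacer strength

# Radius of each sampled vertex after displacement.
displaced = base_radius + strength * cloud_map

# "Delete all 0-level polygons": keep only the samples the
# texture actually pushed outward.
keep = displaced > base_radius
print(keep.sum(), "of", keep.size, "samples kept")  # 2 of 4
```

The higher the texture resolution you want to match, the more samples there are to test, which is exactly why the Bool gets so heavy.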
-
Are those assets open to investigate how they work (the underlying node network)? If they are, then I guess you could rebuild them (even by copy-pasting big parts of them from just before the "wrapping" container node). But if they are protected, I'm afraid that seals the fact that they are meant only for specific users.
-
You can put two spheres, one slightly smaller than the other, inside the Volume Builder and use the smaller one in Subtract mode. This will make a thin spherical shell (it's like a Bool, but with voxels). Then use the Shader Field with the "Object Below" option. I had to fiddle with the Low and High Clip values because the SDF values do not fall between 0 and 1. I used a procedural shader and not a texture, and I don't know if a texture would apply as nicely.

This is my setup, and I too get steps, which I tried to eliminate using a Smooth layer. Fog did not work for me (R23) since the Subtract mode did not work as intended. If that doesn't work for you either, you can export the SDF as VDB and import it for RS to recognize, or just put it under a new Volume Builder in Fog mode. Here I used the Volume Mesher to further smooth out the surface, but it was all lost again when the Fog was re-applied. I guess the solution is somewhere in the middle, like exporting the meshed result as VDB and using that in RS without Fog or any Volume Builder stuff.

Maybe try putting both a high-def texture with alpha and the resulting Fog from the same texture on top of it, and have them somehow complement each other?
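For anyone curious what the Subtract mode and the Low/High Clip fiddling are doing under the hood, here is a small numpy sketch of the same idea: two sphere SDFs, a voxel-style subtract, and a remap of the narrow band around the surface into a 0-1 density. Grid size, radii and clip values are made-up illustration numbers:

```python
import numpy as np

# Coarse voxel grid over [-1.5, 1.5]^3, for illustration only.
n = 32
axis = np.linspace(-1.5, 1.5, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)

outer, inner = 1.0, 0.9   # two spheres, one slightly smaller

# Signed distance fields: negative inside the surface.
sdf_outer = r - outer
sdf_inner = r - inner

# Voxel "bool": subtracting B from A in SDF form is max(A, -B),
# which leaves a thin spherical shell.
shell = np.maximum(sdf_outer, -sdf_inner)
print("shell voxels:", int((shell < 0).sum()))

# SDF values are distances, not 0..1 - which is why the Shader
# Field needs Low/High Clip: remap a band around the surface
# into a usable 0..1 density.
low, high = -0.05, 0.05
density = np.clip((high - shell) / (high - low), 0.0, 1.0)
```

The narrower the clip band relative to the voxel size, the more visible the stepping, which matches what I saw in the viewport.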
-
I've no idea. I know Vue can do it. Normally, if you can't lower the resolution (voxel size) any more but your hardware can handle more detail, you upscale your models (give the Volume Builder more space to work with). Check this
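The trade-off with upscaling is cubic. A quick back-of-the-envelope sketch (made-up numbers) of why giving the Volume Builder more space at a fixed voxel size buys detail but costs memory fast:

```python
# If the voxel size can't go any lower, scaling the model up 2x
# at the same voxel size doubles the effective resolution across
# the object. Illustrative numbers only.
voxel_size = 1.0
size_before, size_after = 100.0, 200.0

res_before = size_before / voxel_size   # voxels across the object
res_after = size_after / voxel_size     # twice the detail

# But the voxel count (memory/compute) grows with the cube:
cost_ratio = (res_after / res_before) ** 3
print(res_before, res_after, cost_ratio)  # 100.0 200.0 8.0
```

So doubling the apparent detail costs roughly eight times the voxels, which is why hardware runs out quickly.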
-
Also, I think your clouds are too opaque... I'm no RS user, but maybe if you added a bit of transparency (and maybe a bit of emission/luminance to keep them visible despite the small thickness), the voxels' borders might "blend" a bit...
-
My opinion about the near future is not optimistic. Experimental AI code from corporations will leak to the secret services of (Eastern) governments with nefarious intents to spread misinformation, panic and uprisings using deep fakes. As a counter-measure, the rest of the (Western) governments will start using AI in weapon systems. Polarization will become more and more prominent, tending towards Orwellian and Machiavellian control systems.
-
I think your cloud layer is too thin. You are trying to add a voxelized shell to a sphere. Steps are inevitable, as each voxel has XYZ coordinates different from its neighbors, making it "stand out". I think the issue will disappear if you thicken the atmosphere by 2-3 more voxel layers.
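To put numbers on "2-3 more voxel layers" (the voxel size and radius below are made up, just to show the arithmetic):

```python
# How much smaller does the inner sphere need to be so the shell
# spans a few voxel layers? Numbers are illustrative, not from
# the original scene.
voxel_size = 2.0        # cm, the Volume Builder voxel size
target_layers = 3       # aim for ~3 voxel layers of thickness

shell_thickness = voxel_size * target_layers
outer_radius = 200.0
inner_radius = outer_radius - shell_thickness
print(shell_thickness, inner_radius)  # 6.0 194.0
```

In other words, shrink the inner sphere by a few multiples of the voxel size rather than by an arbitrary amount.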
-
How would you add thickness to this shape?
HappyPolygon replied to Freemorpheme's topic in Cinema 4D
In the source code, the link https://cdn.jsdelivr.net/npm/twemoji@11.3.0/2/72x72/1f609.png leads to the message "Couldn't find the requested file /2/72x72/1f609.png in twemoji". I don't know what ipsEmoji or twemoji are; they could be plugins or just classes, and they are broken, so the link reverts to an alternative icon. A solution could be found here: https://invisioncommunity.com/forums/topic/459276-help-with-slq-query-to-replace-emoji/

The only emojis not broken are the ones in the last 5 categories. I guess replacing or adding emoji PNGs under https://cdn.jsdelivr.net/npm/ with the appropriate names will solve the issue. The correct path to see all assets and versions of the plugin distribution is https://cdn.jsdelivr.net/npm/twemoji@11.3.0/ and the source download page is https://www.jsdelivr.com/package/npm/twemoji - apparently it's a Twitter emoji collection.
-
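As far as I can tell, Twemoji names its PNGs after the emoji's Unicode codepoints in lowercase hex, joined by "-", which is where 1f609.png (the winking face, U+1F609) comes from. A small sketch of rebuilding such a URL; the base path is the one from the broken link, so treat it as an example, not a guaranteed working CDN path:

```python
# Assumed base path, copied from the broken link in the source code.
TWEMOJI_BASE = "https://cdn.jsdelivr.net/npm/twemoji@11.3.0/2/72x72/"

def twemoji_url(emoji: str) -> str:
    """Build a Twemoji image URL from an emoji's codepoints."""
    codepoints = "-".join(f"{ord(ch):x}" for ch in emoji)
    return f"{TWEMOJI_BASE}{codepoints}.png"

print(twemoji_url("\U0001F609"))  # ...1f609.png
```

That would explain why adding PNGs with the appropriate codepoint names should fix the broken icons.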
Godot 4.0

The long-awaited release makes changes throughout the engine, reworking the scripting, networking, audio and UI design toolsets, improving UX in the Godot Editor, and extending platform support. Godot 4.0 is the biggest update to the engine since 2018, and has been rescheduled several times, with some of its planned features backported to the 3.x releases.

New Vulkan rendering backend

Along with Open 3D Engine, Godot is one of the earlier adopters of Vulkan for real-time rendering, although the API is now starting to be used in DCC tools including ArmorPaint, NeXus and ZBrush. Vulkan is also supported in FSR (FidelityFX Super Resolution), AMD's open-source render upscaling system, which Godot 4.0 uses to improve in-game frame rates.

Better lighting and shadows

A new system based on Signed Distance Fields, SDFGI needs little set-up and scales well to large worlds. VoxelGI needs more set-up – it requires pre-baking, so it's intended for small-to-medium-sized environments – and suffers more from light leaks through geometry, but it's faster, and supports emissive materials fully. The quality of shadow rendering has also been greatly improved in Godot 4.0. For mobile games, or titles where dynamic lighting and shadows would be too much of a performance hit, Godot supports lightmap baking – now possible on the GPU as well as the CPU in Godot 4.0.

Volumetric fog and custom sky shaders

Volumetric fog can be applied globally, or to FogVolumes – defined regions within a level – with the option to adjust density, falloff and scattering properties. Other new environment effects include a sky shader system, intended for creating sky backgrounds not possible with Godot's readymade sky materials, like deep-space environments.

Revamped native physics engine

Godot's 3D physics system has also been overhauled, with the open-source Bullet physics framework superseded by a revamped version of the in-house Godot Physics engine.
Godot Physics now supports the same set of features as Bullet physics, including soft bodies, clothing and heightmap-based collision with terrain.

Improved animation workflow

Key changes include a new animation library system, intended to make it easier to reuse animations between projects, a new animation retargeting system, and improvements to animation blending. The Animation Editor itself now supports blendshape tracks, and animation curve editing workflow has been improved, with the option to select and edit multiple curves simultaneously, and to hide individual curves.

https://godotengine.org/article/godot-4-0-sets-sail/

Ziva VFX 2.1

Unity has released Ziva VFX 2.1, the latest version of its soft tissue simulation plugin for Maya. The update introduces a new method for detecting collisions between tissue layers, and new tissue stiffness and damping controls to improve the quality of the simulations the plugin generates. The plugin mimics the stiffness, density and volume preservation of real tissues, including bone, tendons, muscles and skin; and supports multiple types of physical damping.

Ziva VFX 2.1 introduces a number of new features intended to improve the quality and stability of simulations, including a new method for detecting contacts between tissue layers. Discrete Collision Detection has been replaced by Continuous Collision Detection (CCD), which interpolates the past positions of vertices with their current positions to find contact points. Individual simulation timesteps now take longer to compute, but fewer substeps are required to resolve problems with tissues penetrating one another, which "can often lead to faster simulation times".

The update also adds a new curvature stiffness attribute to the ZMaterial node, enabling users to adjust the resistance of tissues to bending. The zAttachment node, used to connect layers of tissue together, gets a new damping attribute, which can be used to reduce unnatural-looking oscillations in simulations.
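The CCD idea described above – interpolating between past and current vertex positions to find the contact point – can be sketched in one dimension. This is a toy illustration of the general technique, not Ziva's actual solver:

```python
from typing import Optional

def time_of_impact(y_prev: float, y_curr: float) -> Optional[float]:
    """First t in [0, 1] where the linearly interpolated position
    crosses the floor plane y = 0, or None if it never does."""
    if y_prev <= 0.0:        # already touching/penetrating at t=0
        return 0.0
    if y_curr > 0.0:         # stays above the floor the whole step
        return None
    # Solve y_prev + t * (y_curr - y_prev) = 0 for t.
    return y_prev / (y_prev - y_curr)

# A fast vertex that a discrete check would miss entirely if it
# only looked at the position at the end of the step:
print(time_of_impact(0.5, -1.5))  # 0.25
```

This is why fewer substeps are needed: the swept check catches a fast-moving vertex even when its endpoint positions straddle the obstacle.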
In addition, the Iterative Solver introduced in Ziva VFX 2.0 as an alternative to the Direct Solver now supports all four integration methods available to calculate the state of a simulation at the next time step.

https://docs.zivadynamics.com/vfx/release_notes.html

3DF Zephyr 7.0

3Dflow has released 3DF Zephyr 7.0, the latest version of its photogrammetry software. It features performance and workflow improvements throughout the application, with users of the full edition also getting a new wizard for creating orthophotos from aerial images, and automatic registration of images to laser scan data. The Lite edition now supports an unlimited number of source photos. In addition, 3Dflow now offers floating licences for both the Lite and full editions.

https://www.3dflow.net/3df-zephyr-7-0-is-out-now/

Move.ai

The app, promoted as an alternative to conventional optical and inertial motion-capture systems, promises to enable users to extract production-quality motion data from footage captured on two to six iPhones. Move.ai's app is rather different: instead of being used on a single handheld phone, it is designed for use on between two and six tripod-mounted iPhones, arranged to create a capture volume. That puts it in a similar part of the market to camera-based markerless motion-capture systems like iPi Soft's iPi Motion Capture, although unlike with iPi, processing is done in the cloud, not on the user's local machine. Move.ai compares the quality of the data generated to commercial inertial capture suits – or, if using the full six iPhones, to commercial marker-based optical capture systems.
In support of that claim, the firm has posted comparison videos between its own technology and existing systems from Rokoko, Xsens, OptiTrack and Vicon on its website.

The footage from the capture sessions is then processed online on Move.ai's servers: it can be uploaded directly from the iPhones, or sent to the host device to upload later. During processing, users can choose to retarget the raw data to a custom character rig, provided it conforms to Move.ai's standard 49-bone structure. Custom rigs can be uploaded in FBX format, with the option to upload a separate Maya file with rig controls.

The resulting animation can be downloaded in FBX, BVH or USD format, as a Maya HumanIK file, or as a Blender scene containing the raw and retargeted motion, the character mesh, and the camera positions. That makes it possible to use the data in most DCC applications and game engines: the documentation has walkthroughs for Reallusion's Character Creator, MotionBuilder, Omniverse, Unity and Unreal Engine.

https://apps.apple.com/us/app/move-ai/id1642699132?platform=iphone

SketchUp 2023.0 & SketchUp for iPad 6.2

The update iteratively improves modelling workflow, adding a new unified Flip tool; and adds support for referencing CAD files in DWG format to LayOut, SketchUp's module for creating documentation. Users with Studio subscriptions also get a new Revit importer.

New features in the desktop edition of SketchUp 2023.0 include a single unified Flip tool for mirroring geometry selections along orthogonal planes, replacing the old Flip Along commands. There are also a number of smaller improvements to modelling workflow, including the option to remove faces or edges from a selection set, and to reposition the drawing axes by double-clicking. Outside the core application, a new Overlays system lets SketchUp Extensions update dynamically while other tools or Extensions are in use.
Users with Studio subscriptions get a new Revit importer, to streamline the process of importing data from Autodesk's BIM software.

SketchUp for iPad 6.2 adds new Boolean modelling tools, customisable shortcut menus for quick access to commonly used tools and commands, and the option to save components of a project as separate files. It includes a number of features designed to speed up workflow when using a stylus rather than a mouse and keyboard, including Autoshape, an AI-trained tool for converting quick sketches into 3D forms. Other iPad-specific features include Markup mode, which enables users to annotate a SketchUp model; and the option to add photographic textures to a model or view it in augmented reality using the iPad camera. Users can transfer models to the desktop edition via online collaboration platform Trimble Connect, or export them to other DCC apps in a range of standard 3D formats, including OBJ and USDZ.

https://forums.sketchup.com/t/sketchup-for-ipad-release-notes/174473

Plasticity Public Beta

Plasticity features a Blender-like UI, key bindings and selection modes, and is intended to enable artists to create hard-surface assets quickly and intuitively. Users can quickly create complex hard-surface models by combining 2D curves and 3D primitives through Boolean operations, extruding faces, and chamfering or filleting edges, as shown in the video above. It is also possible to arrange objects in arrays, although currently only radial arrays are supported. Completed models can be exported in OBJ, STEP or IGES format for use in DCC or CAD software, and in the native formats of Rhino and Parasolid.

Plasticity is available in free public beta for Windows, Linux and macOS. The software is due for a commercial release on 5 April 2023, and is now available to pre-order.
Indie licences cost $99 and import and export files in OBJ, STEP and Parasolid XT format. Studio licences support a wider range of file formats and cost $299. Both are perpetual node-locked licences. There is also a free trial edition, which only imports STEP files and exports OBJ files.

https://www.plasticity.xyz/

Stability.ai

The tool makes it possible to use the open-source AI image generation model inside Blender, either to convert existing images to textures, or to use a 3D scene to guide the image generated. It's even possible to render out entire Blender animations while keyframing the Stable Diffusion parameters, to create interesting stylised motion-graphics-style effects. There are already free plugins integrating Stable Diffusion into Blender – we covered Ben Rugg's AI Render last year – but Stability for Blender is Stability.ai's official version.

Stability for Blender is compatible with Blender 3.0+. The source code is available under an MIT licence. The plugin itself is a free download, but to generate images using it, you will need to link it to the API key of an account at DreamStudio, Stability.ai's browser-based frontend. Registering for an account is free and comes with 100 credits for generating images: the number needed depends on the resolution and number of processing steps used. Further credits cost $10/thousand.

https://blendermarket.com/products/stability-for-blender

Ayon

Ynput has released Ayon, its open-source pipeline for visual effects and animation, in early access. The platform, previously known as OpenPype, provides the foundation for a VFX or animation production pipeline, connecting a studio's DCC tools, version tracking and project management into a unified system. It can be used by studios of any size, but will probably be of most interest to start-ups and smaller facilities without a team of dedicated TDs to set up or manage pipelines.
The platform has been in development for five years, and used in production on music videos, animated series and movies. Via a browser-based dashboard, users can track the artists assigned to a project, versioning and approvals for individual shots or assets, and see a visual overview of the progress of the project as a whole.

It integrates with a range of key DCC applications: 3D content creation tools like 3ds Max, Blender, Houdini, the Substance 3D products and Maya; compositing and editing tools like After Effects, DaVinci Resolve and Nuke; and 2D animation software like Toon Boom Harmony and TVPaint. For production tracking, Ayon is tightly integrated with ftrack Studio, along with ShotGrid and Kitsu. The platform supports workflows based around USD and real-time rendering in Unreal Engine out of the box, with the intention to add support for new production standards as they emerge.

Ayon is available free in early access. It can be deployed on Windows, Linux and macOS. Ynput hasn't announced a final release date yet.

https://ynput.io/

Baguette

Framestore rigging supervisor Nims Bun has released Baguette, his nodal rigging system for Maya, for free. The toolset, previously used in production at Pixomondo, lets artists create flexible, reusable rigs for characters and vehicles, and includes a number of features designed to streamline production work.

Although it has only just been publicly released, Baguette has been in development for ten years. In an interview with 3DVF last year, Bun said that he began work on it shortly after finishing on the Minions movie in 2012, going on to develop Baguette full-time for eight months. Development then paused as Bun went on to work for VFX facilities including Sony Pictures Imageworks and Industrial Light & Magic, resuming when he joined Pixomondo in 2018, where he became rigging lead. Pixomondo then used the toolset in production for several movies, beginning with Goosebumps 2.

Baguette is available for Maya 2019+ on 64-bit Windows only.
You can download compiled binaries from the GitHub repository. Part of the source code is available under an MIT licence.

https://github.com/nimsb/Baguette

Twinmotion 2023.1 Preview 2

Epic Games has unveiled Twinmotion 2023.1, the next major version of its real-time visualisation software. The second public preview, due out later this month, reworks the software's interface to streamline access to common tools, adds support for decals and volumetric fog in the path tracer, and introduces a new video denoiser. The first public preview, released in December, added support for LED walls for virtual production, shifted support for VR headsets to OpenXR, and overhauled Twinmotion's material system.

One very visible change in Twinmotion 2023.1 is the new interface design, which Epic Games says is intended to make Twinmotion more accessible to users working in fields outside architectural visualisation. The new layout is designed to reduce the number of mouse clicks needed to access key tools and settings, with a Properties panel to the right of the screen, shared with the scene graph, and a library panel on the left. For access to more advanced controls, users can switch to a two-column layout, hiding the library panel. The height of the bottom bar has been reduced, maximising the screen space devoted to the viewport.

It is also now possible to change the scale of the interface manually – there are presets ranging between 75% and 200% – enabling users to trade viewport space against the legibility of interface text. However, one commonly requested change to the interface that will not be available in the release is the option to detach interface panels to display on a second monitor.

The software – which uses Unreal Engine for rendering – is also being updated to Unreal Engine 5.1. The change will make it possible to support more features in Path Tracer, the new path tracing render engine introduced last year, including decals and volumetric fog.
Other new features due in Twinmotion 2023.1 Preview 2 include an experimental video denoiser, making it possible to denoise rendered videos as well as still images. The release will also introduce viewport resolution scaling, letting users boost frame rates when working on complex scenes by having Twinmotion render the viewport at lower resolution, then upscale to fit the screen.

New licences have an MSRP of $499. The software is free for students and educators, and there is also a free trial edition of the software which caps export resolution at 2K, but which is otherwise fully featured.

https://www.twinmotion.com/en-US/docs/twinmotion-release-notes

GeoTracker

KeenTools has released GeoTracker for After Effects 2023.1, the latest version of its real-time 3D object tracking plugin for Adobe's compositing software. The update adds a built-in human head model, streamlining the process of tracking an actor's face, plus a new surface masking system for controlling parts of the 3D model to be excluded from the track.

GeoTracker for After Effects 2023.1 adds a generic built-in human head primitive, making it possible to track actors' faces without the need to import your own custom 3D head model. There is also a new mask type, Surface Mask, used to exclude parts of the model from a track. You can see it in use in the video at the top of the story to exclude the parts of the actor's forehead obscured by his hair to improve the quality of the resulting track. Users can also now export any of the built-in primitives, including the head, in OBJ or FBX format.

The software is rental-only, with subscriptions also providing access to GeoTracker for Nuke. Individual Freelancer subscriptions cost $18/month or $179/year; floating Studio subscriptions cost $499/year.

https://keentools.io/news/2023-1

MoonRay renderer

DreamWorks has released the source code of MoonRay.
A high-performance Monte Carlo ray tracer capable of photorealistic and stylised output

Although DreamWorks has made individual in-house technologies available to the public before – it open-sourced sparse volumetric data format OpenVDB in 2012 – MoonRay is a beast of a different scale. Developed to replace Moonlight, the studio's old rasterisation renderer, MoonRay is a high-performance Monte Carlo ray tracer. It was designed with the aim of keeping "all the vector lanes of all the cores of all the machines busy all the time", and has a hybrid GPU/CPU rendering mode capable of "100% output matching" with CPU rendering.

As well as DreamWorks' trademark stylised animation, MoonRay is capable of photorealistic output, and has the key features you would expect of a VFX renderer, including AOVs/LPEs, deep output and Cryptomatte. It should also play nicely in a standard production pipeline: in the Siggraph presentation linked above, DreamWorks describes it as integrating with Maya and MotionBuilder as well as its own lighting tools. MoonRay also comes with a Hydra render delegate, hdMoonRay, which will make it possible to integrate it as an interactive viewport renderer in DCC software that supports Hydra delegates, like Houdini and Katana.

Arras framework: distributed final-quality, interactive and multi-context rendering

Along with the core renderer, DreamWorks is open-sourcing Arras, its distributed computation framework. As well as final-quality output, it can be used to accelerate interactive rendering, and for 'multi-context rendering' during look dev, visualising multiple lighting or material variants across shots and sequences.

https://github.com/dreamworksanimation/openmoonray
https://openmoonray.org/index
https://docs.openmoonray.org/

ProRender 3.1

AMD has released the SDK for Radeon ProRender 3.1, the next major version of its hardware-agnostic physically based GPU renderer.
First released under its current branding in 2016, Radeon ProRender is an unbiased path tracer with a fairly standard range of production features, including integrated AI denoising. It's GPU-accelerated, hardware-agnostic, and can run across Windows, Linux and macOS. Free integration plugins for Blender and Maya, and a Hydra delegate for Houdini, are in active development, and the renderer is also integrated into SolidWorks Visualize. ProRender is also integrated in Modo, though it has been superseded there by newer renderers, and was once integrated in Cinema 4D. Its 3ds Max and Unreal Engine plugins have not been updated since 2021.

Despite the version number, Radeon ProRender 3.1 is the first public release of the SDK in the 3.x series: there doesn't seem to have been a ProRender 3.0. New features include light linking, enabling users to set lights to illuminate only selected objects in a scene. Other changes include a bevel shader, for generating rounded edges on geometry at render time; new AOVs for volumes and subsurface scattering; and transparency and material blending in the toon shader.

Hybrid Pro, Radeon ProRender's hybrid ray tracing/rasterisation viewport renderer, now supports displacement mapping and should now run on older AMD graphics cards as far back as the Polaris GPUs. AMD has also implemented a new sampling strategy, along with the Nvidia-developed ReSTIR and SVGF, which should result in the viewport resolving to an acceptably low level of noise more quickly.

On Windows and Linux, the rendering backend has been switched from OpenCL to HIP (Heterogeneous-Compute Interface for Portability), AMD's API and kernel language for developing portable applications. Also used in Blender's Cycles renderer and the upcoming Redshift for AMD, it lets software developers support AMD and Nvidia GPUs from a single code base.
As a result of the change, Radeon ProRender's rendering kernels are now pre-compiled instead of having to be compiled at runtime, which should reduce the time to first pixel when rendering on GPU.

https://gpuopen.com/learn/prorender_sdk_3_1_0/
https://github.com/GPUOpen-LibrariesAndSDKs/RadeonProRenderSDK/releases/tag/v3.1.00.patch1

Nerfstudio

Open-source Neural Radiance Field framework Nerfstudio now supports 360-degree video. Nerfstudio 0.1.19 supports equirectangular video, making it possible to generate 3D scenes matching footage recorded on 360-degree cameras and export them to DCC apps for use in games, VFX or visualisation.

Nerfstudio provides a simple API that allows for a simplified end-to-end process of creating, training, and visualizing NeRFs. The library supports an interpretable implementation of NeRFs by modularizing each component. With modular NeRF components, we hope to create a user-friendly experience in exploring the technology. Nerfstudio is a contributor-friendly repo with the goal of building a community where users can easily build upon each other's contributions.

You can find a full list of dependencies and compilation instructions on the project website. In particular, note that Nerfstudio uses CUDA, so you will need a compatible Nvidia GPU to run it.

https://docs.nerf.studio/en/latest/
https://github.com/nerfstudio-project/nerfstudio/

Lumion 2023.0

The compatibility-breaking update adds a new hybrid render engine with support for real-time ray tracing on both AMD and Nvidia GPUs, and extends the software's PBR material system. The key change in Lumion 2023.0 is support for ray tracing: its new hybrid rasterisation/ray tracing render engine makes it possible to create physically accurate lighting, shadows and reflections in renders. The change will bring Lumion into line with other real-time visualisation tools like Enscape and Twinmotion, which already support hardware-accelerated ray tracing.
The new Ray Tracing Effect is supported on both AMD and Nvidia GPUs – although not currently Intel Arc – with renders falling back to rasterisation on older cards. Lumion 2023.0 also extends the software's PBR material system to support eight map types, now including metalness, emissive strength, reflectivity and opacity. Both editions of the software get support for subsurface scattering, for recreating translucent materials like marble; and the Pro edition gets a clearcoat material layer for recreating car paint and varnished surfaces.

Other key changes include a new transform gizmo for positioning objects more precisely in the viewport; and improvements to the Auto Snap feature, for snapping objects to other surfaces. It is also now possible to import camera paths in FBX or Collada format, or to choose from four new preset camera behaviours, including standard dolly and pan/tilt shots, and options to orbit or follow an object. Users can also choose from a range of preset aspect ratios when rendering still images or video.

The release also marks a change in version numbering – under Act-3D's old system, it would have been Lumion 13.0 – and of licensing, with the software now available subscription-only.

https://support.lumion.com/hc/en-us/articles/7441741355804

Canvas 1.4

Canvas 1.4 adds the option to generate 360-degree environments as well as standard 2D images. Both the source sketch and the landscape image are displayed in equirectangular format, with an additional 3D preview to show how the image will look as a spherical environment. The result can be exported in EXR format – also new in this release – at resolutions up to 4,096 x 4,096px. The exported image can then be used as an environment map in DCC tools like Blender or Omniverse, or in game engines: the video at the top of the story shows one being used as a skybox in Unreal Engine 5.
Nvidia doesn't describe the images generated as HDRIs, so we're checking what kind of dynamic range they have, but the product website does say that they can be used to change the ambient lighting of a 3D scene.

System requirements

Canvas 1.4 is available for Windows 10. The software is free, and is still officially in beta. You need an Nvidia GeForce RTX, Titan RTX or RTX GPU with version 522.06+ of Nvidia drivers to use it.

https://www.nvidia.com/en-us/studio/canvas/

Light Tracer Render 2.6

The update reworks materials with surface coatings, with the clearcoat layer – for recreating materials like car paint or varnish – getting a roughness control, and a new Iridescence property for thin-film effects. Changes to lighting and rendering include spectrally accurate dispersion, for more accurate rendering of transparent materials like glass, and support for bi-directional path tracing, for more accurate caustics. For non-photorealistic rendering, it is now possible to render only the outlines of objects to create sketch-style visualisations, or to overlay the outlines onto conventional renders.

Other changes include support for the OpenEXR format, both for importing HDRIs and exporting renders; support for vertex colours on files imported in FBX format as well as glTF and PLY; and the option to export OBJ files.

Pricing and system requirements

The desktop version of Light Tracer Render 2.6 is available for Windows 10+ and macOS 10.15+. New perpetual floating licences cost $99; rental costs $9/month or $72/year.

https://lighttracer.org/

Nvidia and Shutterstock to build AI text-to-3D service

Shutterstock VP of 3D innovation Dade Orgeron described the creation of production-quality 3D models from text prompts as the "Holy Grail" of generative AI art tools. On its launch, Shutterstock expects its new text-to-3D service to generate 3D models of a quality suitable for non-demanding hobbyist work, or as a base that can be refined manually for commercial projects.
Processing will be done online, and is expected to take around 15 minutes per model. Initially, the service is expected to generate single meshes with single textures, although the generation of more complex multi-part models may become possible in future.

Its most likely initial use is to create content for industrial digital twins, recreating real-world objects like machinery, buildings and cars, but Shutterstock expects it to be adopted for entertainment work in future.

The underlying AI models will be trained on assets from TurboSquid, the online marketplace that Shutterstock acquired in 2021, and which currently includes over 1.5 million 3D models.

The use of artists’ work to train AI tools is a contentious issue: something we’ve touched on in stories like this one on Glaze, a free tool intended to prevent unauthorised use of images to train “unethical” AI models.

Shutterstock describes its own policy – introduced following its partnership with DALL-E developer OpenAI to create its online text-to-image generator – as “responsible AI”. Artists can opt out of having their content included in AI training datasets, although Shutterstock told us that only around 10% of its users had done so since the option was added to account settings last month.

Those that don’t opt out are paid for the use of their assets from a contributor fund, with payments made every six months, although Shutterstock hasn’t published details of how earnings are calculated.

Shutterstock’s new text-to-3D service is being developed using Picasso, Nvidia’s new cloud-based platform for building and deploying generative AI tools, also announced at GTC 2023. Picasso is aimed at developers rather than end users, with services being rolled out via Nvidia’s partners.

As well as Shutterstock, Nvidia has partnered with Getty Images to develop new text-to-image and text-to-video models trained on its stock images, and has extended its existing partnership with Adobe.
Shutterstock and TurboSquid’s new text-to-3D capabilities are expected to enter beta in “Q4 2023”. On Shutterstock, they will be available via the Creative Flow suite of apps. Subscriptions cost $12.99/month.

https://investor.shutterstock.com/news-releases/news-release-details/shutterstock-teams-nvidia-build-ai-foundation-models-generative

Silo 2023.2

Silo 2023.0 is something of a jump in version numbering, given that Silo 2021.5 was only released in March, meaning that Nevercenter has gone through its entire set of 2022 releases in just six months.

The shift to Venusian years notwithstanding, Silo 2023.0 is a significant update, bringing procedural modelling to the software.

The release introduces a modifier stack, enabling users to modify geometry non-destructively, toggling individual modifiers on and off or changing the order in which they are applied without affecting other edits.

There are currently six modifiers: Bevel, Crease Edges, Extrude, Numerical Deform, Shell and Triangulate. Numerical Deform is a combined twist and taper modifier; the others are self-explanatory – or at least will be familiar to users of other 3D modelling software.

Users can also bake modifiers to convert a mesh to standard non-procedural geometry, with the option to do so automatically when exporting a model.

The update adds a new Subdivide modifier in addition to the existing subdivision system, making it possible to toggle subdivision on and off, or move the modifier around in the stack.

Other changes include multi-monitor support, and a new Arch primitive, for creating architectural arches, and similar shapes like “tunnels … mailboxes, gravestones [and] half-pipes”.

https://nevercenter.com/silo/features/#release_notes

Substance 3D Modeler 1.2

Substance 3D Modeler 1.2 features a number of changes to improve the import and export of 3D models.
Although users could already set a target polygon count for exported meshes, the update introduces a new Adaptive Factor slider to control how much mesh density increases in areas of high detail.

In addition, imported meshes are now included in exported files without needing to be converted to clay first, to speed up kitbashing workflows, and work on larger scenes that include external content. Conversion to clay is also now supported on meshes with non-manifold geometry.

Artists working with imported CAD data or preparing models for 3D printing can now import and export files in STL, IGS, JT and STEP format.

Following Adobe’s switch to the OpenXR standard in Substance 3D Modeler 1.1, the software also now supports Pico’s VR headsets, along with Meta’s new Quest Pro.

Workflow improvements include a new Keep Upright setting to reduce motion sickness in VR, the option to switch to desktop mode manually, and an automatic prompt to save work on closing the application.

Now available as $149.99 perpetual licences via Steam.

https://substance3d.adobe.com/documentation/md/current-release-2023-03-20-v1-2-253984847.html

FumeFX 6.0 for 3ds Max

It’s a major update, turning the popular gaseous fluid simulator into a complete multiphysics system, with a new node-based architecture for creating particles, rigid bodies and soft bodies, including cloth and rope.

It is also possible to mesh particle systems into liquids, and integration with Autodesk’s Arnold renderer has also been extended, making it possible to render splines connecting particles as Arnold curves.

Other changes include a new animation playback system, which can be used for flocking simulations.

The main new feature is NodeWorks, a new node-based environment for authoring simulations by wiring together over 140 readymade nodes.
NodeWorks is also capable of generating flocking simulations, with a new character animation control system making it possible to simulate groups of creatures or people, as well as simple geometry. Users can trigger new animations according to other events in a simulation, with FumeFX blending smoothly between the two animation states.

Updates to voxel simulations include a new Pyro vorticity type for adding detail to fire and smoke: according to the online documentation, the result “resembles pyroclastic flows with lots of tiny details”.

In addition, Sitni Sati has discontinued perpetual licences of FumeFX, making the software subscription-only. One-year subscriptions have an MSRP of $365/year for the full software or $95/year for a simulation-only licence: roughly half the price of the old perpetual licences, which cost $695 and $195 respectively.

https://www.afterworks.com/FumeFX.asp

Unity AI

Unity has posted a teaser video for Unity AI, a mysterious initiative it describes as “building an open and unique AI ecosystem that will put AI-powered game-development tools in the hands of millions of creators”.

Generative AI art tools have been the flavour of the week, with Adobe announcing Firefly, its suite of online AI art tools, including a text-to-image generator, and Nvidia making several more announcements, including an upcoming text-to-3D model generator, developed in partnership with stock asset library Shutterstock.

Unity’s video is considerably more enigmatic, showing a user typing in plain text prompts like, ‘Give me a large scale terrain with a moody sky’ and ‘Add a dozen NPCs… make them flying alien mushrooms’.

Sadly, the video doesn’t actually show the large scale terrain or the flying alien mushrooms, so we have no idea in what form they would be generated inside the game engine – if, indeed, they are generated at all.
However, it may refer to a program for third-party developers, rather than something Unity is developing itself: according to Reuters, Unity “aims to open a marketplace for generative AI software”.

https://create.unity.com/ai-beta

Photoshop 24.3

Like Photoshop 24.2 last month, Photoshop 24.3 is a small workflow update to the Share for review system added to Photoshop last year.

Users can now navigate to the Share for review dialog directly from the Share button, and can create review links from PNG, JPG, TIFF and PDF files without needing to re-save them to a supported format.

Photoshop 24.3 is available for Windows 10+ and macOS 11.0+ on a rental-only basis. In the online documentation, the update is also referred to as the March 2023 release or Photoshop 2023.3.

https://helpx.adobe.com/photoshop/using/whats-new/2023-2.html#more-in-share-for-review

MagicSquire 7.0

Digital painting tools developer Anastasiy Safari has released MagicSquire 7.0, the latest version of his Photoshop brush organisation plugin.

New features in MagicSquire 7.0 include the option to assign background images to brush groups to help identify them at a glance. The images are displayed in the UI, behind the brush thumbnails. Users can also now choose colours for the thumbnails themselves, again to differentiate brushes visually.

Workflow improvements include the option to Favorite brushes by right-clicking them, and to filter brushes and tools by tool type: for example, to view only Eraser or Smudge brushes. It is also possible to import brushes directly from Photoshop, rather than having to export .abr or .tpl files.

In addition, the entire UI is now scalable, not just the brush thumbnails, making it possible to adjust the size of the interface and controls to improve legibility on HiDPI displays or high-resolution pen displays.

MagicSquire 7.0 is available for Photoshop CS5 and above on Windows and macOS. It costs $19.
https://blog.anastasiy.com/?p=2395

MAXON ONE Spring Release

3DS MAX 2024

The update adds a new Boolean modifier for procedural modelling, extends the Array modifier and Slate Material Editor, and introduces experimental support for OCIO colour management.

Other changes include improvements to the import of STL files, which is now “up to 10,000x faster”. In addition, Autodesk Revit Interoperability and Autodesk Inventor Interoperability are now installed on demand when first importing a Revit or Inventor model, rather than being part of the 3ds Max installer.

Outside the core application, 3ds Max’s Arnold and Substance plugins have been updated.

https://help.autodesk.com/view/3DSMAX/2024/ENU/?guid=GUID-5F6E1DF3-C9E4-4784-A5C3-31C2490688D0

Maya 2024

It’s a wide-ranging update, expanding the software’s retopology toolset, adding a complete new USD-based material authoring system, and neat new brush-based workflows for ‘sculpting’ animation curves.

The release also introduces native Apple Silicon support, with the core application, its Bifrost multiphysics plugin and the MtoA plugin for the Arnold renderer all running natively on Macs with M1 and M2 processors.

https://help.autodesk.com/view/MAYAUL/2024/ENU/?guid=GUID-29E8C53B-A201-41F5-94A8-4562C13AC219

3DCoatTextura 2023

Key changes include a new screen-based colour smoothing tool, improvements to the colour picker and UV mapping, support for ACES tonemapping, and an updated version of the AppLink plugin for Blender.

As with 3DCoat 2023.10, released alongside it, the online changelog includes features added over the entire previous year, including Power Smooth, a new colour smoothing tool in the Paint Workspace. Pilgway describes it as “a super-powerful, valence/density independent, screen-based color smoothing tool” for when users need much stronger smoothing than the standard smoothing effect applied by holding [Shift].
The Color Picker has also been updated, with changes including support for hexadecimal colour values.

Other changes include updates to UV unwrapping, with each connected object now unwrapped in its local UV space, and general improvements resulting in fewer UV islands and shorter UV seams being generated.

The update also adds support for ACES tonemapping, as shown in the video above, and improves the turntable rendering system, including the option to render at a higher resolution than screen resolution.

In addition, the Blender AppLink, the software’s Blender integration plugin, is now being maintained directly by Pilgway, and has received updates and bugfixes.

https://pilgway.com/release_note/3dcoattextura/v202310

Blender 3.5

New features range from support for vector displacement maps when sculpting to the foundations of an interactive real-time compositor, by way of a huge set of readymade assets for generating and styling hair. Below, we’ve picked out five of the most significant changes, along with smaller updates to the core toolsets, including hidden gems like the ability to copy and paste UVs between models.

Blender 3.5 also features updates to most of the software’s other core toolsets, with Mac users getting a new Metal backend for the viewport. To judge from the benchmark results in the release notes, it’s 1.5-3x faster than the old OpenGL backend.

The UV toolset gets a handy option to copy and paste UVs between groups of faces with the same topology. UVs can be copied between UV channels, between meshes, or even between .blend files.

Changes to the animation tools include a new Ease operator in the Graph Editor and workflow improvements to the Pose Library, including the option to flip poses directly from the context menu.

Grease Pencil, Blender’s 2D animation toolset, gets more updates than we can cover in two lines, but fortunately, they’re nicely summarised in the video embedded above.
The motion tracking toolset gets an update that makes it possible to change the underlying resolution of a movie clip without losing the optical centre, where this differs from the centre of the frame.

Changes to file import and export include support for USD shape primitives when importing data in Universal Scene Description format, and support for the USDZ file format, often used in AR projects.

Outside the core application, there are updates to several key extensions, including Sun Position and the glTF 2.0 importer and exporter, and compatibility-breaking changes to the Python API.

https://wiki.blender.org/wiki/Reference/Release_Notes/3.5

RandoMixer

Quickly generate design, materials and lighting variants of your 3D models and scenes.

RandoMixer generates variations of sets of objects in a scene, randomising the position, rotation and scale of meshes, the materials applied to them, whether scene lights are on or off, and which view camera is used.

That makes it a powerful tool for brainstorming, automatically generating sets of variant looks for 3D models, scenes or even stylised 3D characters, in order to explore design ideas, or to present alternatives to clients.

It includes a set of ‘smart scattering’ tools for set dressing, capable of automatically placing smaller objects like books and ornaments on furniture within a scene, also available as a separate plugin.

It’s even possible to export collections of variations to the 3ds Max timeline, making it possible to generate simple motion-graphics-style animations, as shown towards the end of the video above.

Pricing and system requirements

RandoMixer is available for 3ds Max 2015+. A perpetual licence costs $120 for use on up to two machines.

https://www.splinedynamics.com/randomixer/

3DCoat 2023

New features include support for soft selection during modelling and retopology, a timelapse screen recorder, and better performance when subdividing meshes.
Other changes since 3DCoat 2022 include a new multi-resolution sculpting system, improvements to the Sketch tool, painting tools, and file export, including export to Blender and Unreal Engine. Pilgway has also released 3DCoatTextura 2023, the latest version of the cut-down edition of 3DCoat for texture painting and rendering.

The release also introduces a new timelapse screen-recording tool, which records the screen at user-specified intervals to generate a timelapse video of a model being created.

Other changes include improvements to AutoMap, 3DCoat’s automatic UV unwrapping system. In addition, 3DCoat 2023 supports the export of models in IGES format: functionality that will eventually be moved to a separate paid add-on module.

Prices due to rise from 11 April 2023

With the release of 3DCoat 2023, Pilgway is also raising the price of perpetual licences and subscriptions to 3DCoat: its first price increase since launching its current online store. The Ukraine-based firm attributes the rise to the effects of the ongoing Russia-Ukraine war on the economy. The increase doesn’t come into force until 11 April 2023, so you can buy 3DCoat at its old price until then.

Price and system requirements

3DCoat 2023 is available for Windows 7+, Ubuntu 20.04+ and macOS 10.13+. For individual artists, new perpetual node-locked licences of 3DCoat cost €439. Subscriptions cost €20.80/month or €169.85/year. Rent to own plans require 11 continuous monthly payments of €41.60.

https://3dcoat.com/forum/index.php?/topic/25785-3dcoat-20212-development-thread/

Cesium plugins for UE5, O3DE, Unity, Omniverse

The plugins enable developers to use high-resolution 3D geospatial data, such as that captured by aerial photogrammetry and LIDAR scans, to reproduce the real world inside popular game engines.

A runtime 3D Tiles engine makes it possible to stream massive 3D geospatial datasets into the host engine in real time, and provides level of detail and caching functionality.
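For context on how that level-of-detail streaming usually works: 3D Tiles drives refinement with a screen-space error test, projecting each tile’s geometric error (in metres) to pixels and loading finer tiles when the result exceeds a threshold. A simplified sketch of that heuristic – the formula follows the 3D Tiles specification, and the 16-pixel default mirrors CesiumJS, but treat the function names and exact values here as assumptions:

```python
import math

def screen_space_error(geometric_error, distance, screen_height_px, fov_y_rad):
    """Project a tile's geometric error (metres) to pixels on screen."""
    sse_denominator = 2.0 * math.tan(fov_y_rad / 2.0)
    return (geometric_error * screen_height_px) / (distance * sse_denominator)

def should_refine(geometric_error, distance, screen_height_px=1080,
                  fov_y_rad=math.radians(60), max_sse_px=16.0):
    """Refine (stream in child tiles) when the projected error is too visible."""
    return screen_space_error(geometric_error, distance,
                              screen_height_px, fov_y_rad) > max_sse_px

# A tile with 10 m of geometric error, viewed from 500 m away, projects to
# roughly 18.7 px of error on a 1080p screen, so it gets refined:
print(should_refine(10.0, 500.0))   # True
print(should_refine(10.0, 5000.0))  # False – far enough away to keep coarse
```

The same test run against the camera every frame is what lets a world-scale dataset stream in only where the camera is looking.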
The resulting digital replica of the Earth conforms to the WGS84 standard used by GPS systems. Suggested use cases include open-world games, architectural visualisations and military training apps.

To make use of the Cesium plugins, users need a source of geospatial data to pull into the game engine. By default, that’s Cesium ion, Cesium’s own platform, which provides streamable content including world terrain, buildings and imagery from Bing Maps, and which is integrated inside the plugins. However, the plugins are open-source, so developers can create their own integrations for other platforms.

https://www.unrealengine.com/marketplace/en-US/product/cesium-for-unreal
https://github.com/CesiumGS/cesium-o3de/releases
https://github.com/CesiumGS/cesium-unity/releases
https://github.com/CesiumGS/cesium-omniverse/releases
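For context, converting a WGS84 geodetic position (latitude, longitude, height) to the Cartesian coordinates a game engine works in is standard geodesy; the ellipsoid constants below are the published WGS84 defining parameters. This is an illustrative sketch, not Cesium’s actual code:

```python
import math

# WGS84 ellipsoid constants
WGS84_A = 6378137.0                    # semi-major axis, metres
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, height_m):
    """Convert geodetic coordinates to Earth-Centred Earth-Fixed (metres)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + height_m) * math.cos(lat) * math.cos(lon)
    y = (n + height_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + height_m) * math.sin(lat)
    return (x, y, z)

# A point on the equator at the prime meridian sits one semi-major axis out:
print(geodetic_to_ecef(0.0, 0.0, 0.0))  # (6378137.0, 0.0, 0.0)
```

Engines that place content on a WGS84 globe do essentially this, then re-origin the result near the camera to avoid floating-point precision problems at planetary scale.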
-
If you are going to use Redshift, anything other than Nvidia is useless (so the ASUS is not recommended).
-
How would you add thickness to this shape?
HappyPolygon replied to Freemorpheme's topic in Cinema 4D
What seems to be the problem? The Cloth Surface generator seems to work fine.
Modelling/sculpting shape of boole from object
HappyPolygon replied to Ben Kagiki's topic in Cinema 4D
My approach would include the Voronoi Fracture Object. You can later use an Effector to hide fragments, revealing parts underneath. For extra detail, enable Detailing (slow). You could use a Displacer Deformer for faster results.
I have no idea. I suggest visiting each site and doing some proper market research on their capabilities. They probably have demos to try out.
-
That was so clickbait and so cringe when I first watched it (about a month ago), I was furious at how they insisted on not saying much about what they were showing. The media (even the more objective, scientific outlets) did not make sure the message was communicated properly to the average Joe. Everyone thought they were witnessing a LEGO robot turn to liquid, move, and reassemble. What was really happening was a magnetic event liquefying a solid piece and moving it out of an enclosed space. Then, off-camera, they re-cast the liquid into its previous form. But everyone thought the actual innovation was morphing the metal into any form!

Don't forget that these kinds of AIs are trained on a vast amount of internet text. That also includes conversations, concerns and movie plots about AI itself. Most of them are about an angry AI. It's not weird at all to expect the trained AI to respond according to what it has read. And yes, AI is biased. And we didn't hear the video presenter ask "Why do you want to kill humans?". Very convenient for making a clickbait video for people who love terror.

There is no reason for an AI to have feelings or preferences. Why should it prefer killing people or preserving life? It's not a personal opinion. It just does what it has learned (like humans). You could feed an AI all communist texts ever written and have it construct the "perfect" world (which would eventually collapse). You could feed an AI all Nazi texts and have it be Hitler's favorite bot. You could feed an AI all the philosophical texts of humanity and have it be the most illogical person you've ever met...
-
There's also FumeFX; it doesn't need Redshift. As far as I know, TurbulenceFD doesn't need RS either.
-
Sorry, I didn't mean to offend you with your suggestion. It was just a comment on the specific way he chose to tackle the effect.

Anyway, here's my try. I felt so frustrated that most of the deformers I wanted to use did not support fields... Not really proud of this since I had to fake it.

@georgedsee I left the Correction Deformer as a proof of how I used it to create the spline path. It's completely useless now, you can delete it. The carpet looks like it retracts a bit in the end. That's because the actual sweep spline turned out to be a bit longer than the actual carpet length... Talk about luck, 'cause I just eyeballed the unrolled length... You can trim it by one or two segments, but don't forget to also trim the animation paths. I also changed the rolled spline to Adaptive intermediate points for better distribution.

Why did you use the MoSpline? I figured out it had no actual consequence in the scene. I'm not sure about the Delay effector either... in my setup it has no useful effect.

Now the problem is that the rolled part remains at a constant radius instead of shrinking. I'll have one last look at it this evening. roll (unravel).c4d
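On the shrinking-radius problem: conservation of material actually gives a closed-form answer you could drive the sweep radius with. The cross-section area of the rolled-up part must equal (remaining length) × (carpet thickness), so the outer radius is a simple square root. A quick sketch of the math (variable names are mine, not from the scene file):

```python
import math

def roll_radius(remaining_length, thickness, core_radius=0.0):
    """Outer radius of a rolled carpet, from area conservation:
    pi * (R^2 - r0^2) = remaining_length * thickness
    """
    return math.sqrt(core_radius ** 2 + remaining_length * thickness / math.pi)

# As the carpet unrolls, remaining_length drops and the radius shrinks
# back down to the core radius:
for length in (400.0, 200.0, 0.0):
    print(round(roll_radius(length, thickness=2.0, core_radius=5.0), 2))
```

Keying (or wiring via XPresso) the roll's scale to this curve instead of a constant would fix the conservation-of-mass look.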
-
Change Scale to Size...
-
There is a plugin called Dépliage; I guess it could make this effect very easy. Unfortunately, it's only available up to R23. There's also this script, which I haven't tested in R23. From the screenshots, I guess it could somehow produce a similar effect if put under a Subdivision Surface.
-
I liked how he tackled his problem, but call me a perfectionist: I really cannot ignore the conservation-of-mass problem. The rolled part of his geometry always looks the same no matter how long the carpet is. So although he really put effort into this, it looks faked, and someone could assume other methods were used to replicate it if the method weren't revealed.