Everything posted by HappyPolygon
-
Rookie Awards 2024 https://www.therookies.co/contests/groups/rookie-awards-2024
Discover the 14th Annual Rookie Awards – your gateway to the creative industries! Showcase your talent, win amazing prizes, and get noticed by the world's top studios. Whether you're a student, hobbyist, or emerging artist, this global platform is your chance to shine. Join now! Are you a budding talent in games, animation, visual effects, immersive media, motion graphics, 3D visualization, or related creative industries? The Rookie Awards is the perfect platform to showcase your skills on a global stage. Whether you’re a student, self-taught artist, or hobbyist, this competition is your opportunity to earn an industry ranking certificate and potentially kickstart your dream career.
Contest Overview
The Rookie Awards is all about connecting your talent with the right industry. Your journey starts by selecting a category that aligns with the industry you aspire to join. Think of it as your first step towards a career in games, animation, visual effects, or any of the creative fields we celebrate.
Groups & Categories
Explore our 4 main contest groups, each offering unique opportunities:
Rookie of the Year: 10 categories for individual artists across various industries.
Film of the Year: For individuals or teams in film production.
Game of the Year: Categories dedicated to game development.
Career Opportunities: Connect with production studios for potential career paths.
Who Can Enter?
Entrants must be 18 years or older and have less than one year of professional experience in any of the creative industries featured in the contest categories, as of June 1, 2024. For further information please read the rules.
Important Dates
Entries Open: 7 March 2024
Entries Close: 1 June 2024
Finalists Announced: 9 July 2024
Winners Announced: 27 July 2024
-
Marvelous Designer 2024.0 New features in Marvelous Designer 2024.0 include support for soft body simulation. The implementation is intended to improve the realism of cloth-to-skin contacts, so it’s a simple skin elasticity system rather than a fully fledged simulation toolset. It currently only works for Marvelous Designer’s default avatars, and animations with skin deformations can currently only be exported as Alembic caches. Other changes to the physics features include support for multiple controllers for wind effects. There are also useful changes when integrating Marvelous Designer into a production pipeline. When importing a 3D character from other DCC software, Marvelous Designer can now automatically convert it to one of its native avatars, matching the default rig structure. It is also now possible to scale custom avatars manually when importing in FBX format as well as OBJ format. For export, Marvelous Designer’s new EveryWear system, shown in the video above, streamlines the process of optimizing clothing, particularly for use in games or real-time applications. It automates key tasks, including fitting and rigging a garment to an avatar, baking textures and creating LODs. In addition, the update introduces Marvelous Designer LiveSync, a live link to Unreal Engine. It removes the need to export data manually when using Unreal Engine as a real-time renderer, making it possible to preview changes to a garment inside Marvelous Designer in real time. The link is still a work in progress, and does not currently work on macOS, or fully support fur, or accessories like earrings. You can find a list of current limitations in the online manual. It is also now possible to record animation directly from the Marvelous Designer viewport, with users able to export animations of 3D characters at up to 1080p resolution. 
Other changes include a new Puckering system, which generates fine wrinkles in 3D clothing: most obviously, along seams, but it works along any internal line segment in a pattern part. There are also a number of smaller improvements to workflow when creating garments, adjusting topology, setting up prop collisions, and exporting data in USD and Alembic format. Marvelous Designer 2024.0 is available for Windows 10+ and macOS 12.0+. It is rental-only. Personal subscriptions cost $39/month or $280/year. Enterprise subscriptions cost $199/month or $1,700/year for a node-locked licence, and $2,000/year for floating licences. https://support.marvelousdesigner.com/hc/en-us/articles/29074012807449-Marvelous-Designer-2024-0-New-Feature-List Substance 3D add-on 2.0 for Blender The 2.0 update is the first to the plugin since it was officially released last year, but it’s largely an under-the-hood update. Adobe has “completely refactored” the plugin to improve performance, and to make it easier to add new features in future. In the short term, perhaps the most significant change is that the plugin supports Blender 4.0, the latest version of the software, and that it now has a detailed tutorial video, embedded above. Other changes – not listed in the release notes, but confirmed by Adobe on Instagram – include support for value and string outputs, and the option to change bit depth on texture outputs. The Substance 3D add-on for Blender is compatible with Blender 3.0+ on Windows, Linux and macOS. It’s a free download, and does not require you to register on Adobe’s website. To use the plugin, you will also need to install Adobe’s Substance Integration Tool. https://substance3d.adobe.com/plugins/substance-in-blender/ Ragdoll for Blender 1.0 Ragdoll Dynamics streamlines the process of adding realistic physics to character animation. 
Artists can establish the timing of a shot, feed in the basic keyframes, and have Ragdoll add secondary motion automatically – for example, for hair, clothing, or even muscle dynamics. The software is lightweight enough to make it possible to work in real time, and unlike conventional simulations, output is deterministic, meaning that each playthrough is identical. Other key features include Live Mode, which speeds up the process of posing characters by letting animators drag geometry directly in the viewport, rather than having to use rig controls. Since Ragdoll's original release, Imbalance has rearchitected the software from a Maya plugin to a standalone application with separate integrations to other DCC software: initially Maya, and now Blender. The Ragdoll for Blender plugin itself is open-source, but to do anything practical with it, you need the commercial Ragdoll Core application. The new Blender integration supports most of the same features as Maya, including Live Mode. Those not currently supported include Force Fields and the option to import physics as a base from which to work. You can see a feature comparison table in the release announcement. It is currently only available on Windows and Linux, not macOS: Imbalance is working on support for Apple’s Metal API, now used for viewport rendering in Blender. According to Imbalance, the Blender version – much of which is written in Python rather than C++ – has around “0.6x performance compared to Maya”. Ragdoll for Blender 1.0 is compatible with Blender 3.4+ on Windows and Linux only. For individual artists, perpetual Freelancer licences cost $199. For studios, Complete licences cost $799 or $79/month for node-locked licences; $999 or $119/month for floating licences. Unlimited licences, which add a Python API, and advanced features like physics export, cost $1,999 or $199/month for node-locked licences; $2,999 or $299/month for floating licences. 
The source code for Ragdoll for Blender and its bpx library is available under an MIT license. https://ragdolldynamics.com/ Textile Generator v2 Our Textile Generator v2 allows you to generate an infinite quantity of textile textures for your engine using popular weave types such as plain, twill, chevron, satin and more. The generator comes with presets that you can use as a starting point to customize your own textile such as denim, silk satin, undyed linen, polyester, knit wool and many more. The generator is an .SBSAR file that you can load using Substance Player or anywhere else where .SBSAR files can be loaded. Generate repeating textile textures up to a resolution of at least 8192×8192. Exports all commonly used textures such as albedo, normal, height, roughness and many useful masks. Controls over thread count, thread stiffness and customizable weave types. Threads that have fibers and plies with controls for size, spin, quantity, color and more. Starting presets that allow you to quickly generate denim, wool, linen, polyester and other common textiles. Slubs, knots, gaps, wrinkles, bumps and discoloration. Support Outgang and fund the further development of the Textile Generator. Personal/Indie – $50.00 USD Single-Project Studio License – $400.00 USD https://outgang.studio/downloads/textile-generator-pro/ Unity Muse Unity has now updated Muse Texture, rolling out a new AI model for texture generation, Photo-Real-Unity-Texture-2. Like its predecessor, Photo-Real-Unity-Texture-1, it’s a custom version of Stable Diffusion, “trained entirely on data and images that Unity owns or has licensed”. You can read more about how it was created in this blog post: Unity doesn’t specify exactly what training set was used, but says that it “[did] not comprise any data scraped from the internet”. 
The update focuses on “improving material types that commonly occur in games”, including wood, bricks, concrete, leather, metals, gravel and soil, with the new model generating texture maps with greater detail and color consistency. In addition, generated heightmaps are now 16-bit by default. Unity Muse is a subscription service. It is currently in early access, and costs $30/month. On its full release in Spring 2024, Unity will introduce additional credit-based pricing. By default, Unity collects user data to help train AI models, but according to the online FAQs, you can opt out of sharing content data. https://blog.unity.com/engine-platform/ai-model-improvements-higher-quality-muse-textures Particle Illusion Pro Part of Boris FX’s Continuum suite of effects plugins for compositing and video editing software, Particle Illusion is a GPU-accelerated particle generator for motion graphics and VFX work. It lets users create 3D particle effects using a node-based workflow to combine emitters, forces and deflector objects, with the option to apply fluid dynamics, or generate particle trails. In 2020, Boris FX released a free feature-limited standalone edition of the software, which has received regular updates ever since. Particle Illusion Pro takes that free edition and removes the two main restrictions: that you couldn’t composite particles directly into source footage, and that it didn’t support tracking data. The Pro edition can import background video in standard file formats, and can import tracking data for emitters or cameras from Boris FX’s Mocha and SynthEyes software. In addition, the current release, Particle Illusion Pro 2024, includes the features added to the integrated version of Particle Illusion in Continuum 2024. Key changes include the option to create particle sprites using generative AI, and to make particle lines true 3D objects. 
Particle Illusion Pro can still be run for free indefinitely in trial mode, which disables the import of background video and tracking data. Like the original free edition, output is not watermarked, although trial mode is “intended for non-commercial or personal use only”. Boris FX told us that it had introduced the Pro edition because “we received many requests from customers … who were looking to use Particle Illusion on commercial/paid projects and needed full access to our technical support”. Particle Illusion Pro 2024 is compatible with Windows 10+ and macOS 10.15+. The software is available rental-only. Subscriptions cost $25/month or $195/year. Particle Illusion is also available as part of Continuum, for which perpetual licenses cost from $695 to $1,995 depending on the host application; or via the $399 Continuum Particles Unit. https://borisfx.com/release-notes/particle-illusion-pro-2024-v17-0-3-release-notes/ HDR Light Studio 8.2 First released in 2009, HDR Light Studio is a multi-platform real-time lighting design tool. Although its original use was authoring synthetic HDRIs, it has since evolved into a much broader toolset, including the option to generate and control HDR-textured 3D area lights in many of its host applications. Noteworthy features include LightPaint, which lets users position highlights on the surface of a 3D model by clicking on its surface in the render view. Connection plugins are available for most major DCC applications, including 3ds Max, Blender, Cinema 4D and Maya, and several key CAD tools, creating a live link to the 3D software. It is used in industries including motion design and VFX, but its key use is product visualisation. Key changes in HDR Light Studio 8.2 include an update to the signature LightPaint system. The Shadow and Shade modes work in similar ways to the standard Illumination mode, but Shadow mode places the location clicked in the render view in the shadow of a light. 
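Lightmap doesn't document LightPaint's internals, but the vector math behind highlight placement can be sketched: the light direction that produces a highlight at a clicked point is the view direction reflected about the surface normal, and that direction then maps to a pixel on the equirectangular HDRI. A minimal sketch (the lat-long mapping convention here is an assumption, not Lightmap's actual code):

```python
import math

def reflect(view, normal):
    # Reflect the (unit) view direction about the (unit) surface normal:
    # a light placed along the returned direction produces a mirror
    # highlight at the clicked point.
    d = sum(v * n for v, n in zip(view, normal))
    return tuple(v - 2 * d * n for v, n in zip(view, normal))

def dir_to_latlong_uv(d):
    # Map a unit direction to (u, v) on an equirectangular (lat-long)
    # HDRI. Convention assumed: +Y up, u wraps around the horizon.
    x, y, z = d
    u = 0.5 + math.atan2(x, -z) / (2 * math.pi)
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return u, v
```

Shadow-style placement would instead solve for a light direction that leaves the clicked point occluded, rather than lit.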
Shade mode places a light on the HDRI on the opposite side of the model. Other changes include a new Reflection Filter, for mirroring lights on the generated HDRI, with the option to adjust the intensity, alpha and blending of the reflections. The Scrim Light introduced in HDR Light Studio 8 gets an Infinite Plane setting for softer falloff. There are also changes to lighting presets, including a new Audition mode, with users able to mouse over a thumbnail in the Library panel and see the preset applied in the render view. In addition, 405 of HDRMAPS’ free 2,048 x 1,024px HDRIs are now bundled with the software. HDR Light Studio 8.2 is available for Windows 10+, CentOS 7.9+ Linux and macOS 11.4+. Connection plugins are available for a range of common DCC and CAD tools: you can see a table of versions supported and HDR Light Studio features available here. The software is rental-only. Indie subscriptions, for users with revenue under $100,000/year, cost $220/year. Pro subscriptions cost $445/year for a node-locked license, $975/year for a floating license. Both include 12 of the connection plugins, including all those for DCC apps. Automotive subscriptions cost $1,495/year for a node-locked license, $2,245/year for floating, and add connections for DeltaGen, Patchwork3D, VRED and the new Unreal Engine connection. https://help.lightmap.co.uk/hdrlightstudio5/releasenotes-hdrlightstudio-8-2.html Yeti 5.0 Key changes in Yeti 5.0 include a new hair shader for Maya’s Viewport 2.0. According to Peregrine Labs, it “aims to match commonly used production hair shading techniques” and features controls for melanin and IOR. It is also now possible to import Maya’s XGen Interactive Grooms into the Yeti graph, and all of Yeti’s Maya nodes now support soft selection and weighted transforms. Under the hood, the Yeti graph gets a new Groom Sampler, enabling any parameter to be controlled by a groom’s paintable attribute more directly. 
The software is now available for Windows and Linux only, macOS support having been discontinued with Yeti 4.2. Yeti 5.0 is available for Maya 2023+, running on Windows 10+, RHEL, CentOS and Rocky Linux 8.5. Indie licenses – one perpetual node-locked workstation licence and one render licence – are available to artists with revenue under $100,000/year, and cost $329. Studio licenses – one perpetual floating workstation licence, plus five render licences – cost $699. Further packs of five render licences cost $399. https://peregrinelabs.com/blogs/news/yeti-5 Modo 17.0 Foundry’s marketing material for Modo 17.0 focuses on performance first, new features second. Under the hood, key changes include a new system of View Objects (VOs), which allows for multi-threading when drawing to the OpenGL viewports, with buffers for the drawing being generated in background threads, then provided to the OpenGL draw calls when ready. The change should improve viewport interactivity, particularly when playing back animation or evaluating complex rigs, with the launch video showing frame rate increases of 3-7x. In Modo 17.0, the software can use two threads at once, but future updates “should allow for multiple background threads”. In addition, Modo’s modeling tools are being rewritten to follow a faster “incremental code path” for operations that do not create geometry, such as moving vertex positions. The change currently applies to 10 modeling tools and MeshOps, including Polygon Extrude, edge tools, and the Clone Effector, plus UV Unwrap. According to the launch video, speed improvements can be “as high as 30x for individual tools”, although most of the examples in the video show speed boosts of around 3-10x. For Mac users, Modo 17.0 also features a new Apple Silicon build, bringing “up to 50% speed improvements” on current Macs with ARM-based M1, M2 and M3 processors. 
Modo is one of the last major DCC applications to get native Apple Silicon support, and its development doesn’t seem to have been entirely straightforward. Some features are unavailable in the new version “due to third-party technology not being compatible with Mac ARM”, including the IKinema full-body IK and AxF material libraries. Users needing the features affected, which include the Pose Tool and animation retargeting, are being advised to use the old Intel build, which runs on current Macs using Rosetta emulation. Plugins without ARM support will also require the Intel build. Foundry will update its own NPR kit to support ARM processors during the Modo 17 Series of releases, and is “investigating solutions” for key third-party CAD import/export tools like nPower Software’s Power Translators and Power SubD-NURBS plugins. Another key change that Foundry announced last year was the mothballing of Modo’s mPath renderer in favor of a new API to connect the software more easily to external renderers. Modo 17.0 now ships with the Prime edition of its OctaneRender integration plugin, making it possible to use the GPU renderer for free on a single GPU on Windows and Linux in addition to Otoy’s existing macOS trial edition. Network rendering and rendering on multiple GPUs require a paid Studio+ subscription. Modo is the latest DCC application to ship with OctaneRender Prime, following LightWave Digital’s bundling of its own Octane integration with LightWave 2023 last year. Updates to existing features include improvements to the Advanced Viewport display mode. Users can now toggle viewport textures without the need to use the Shader Tree, and environment lighting has been decoupled from scene lighting. Changes to modeling tools include support for falloffs in the new PolyHaul tool, support for clones and rounded corners in Primitive Slice, and for partial circles in Radial Align. The Mesh Cleanup system gets a new Fix Gaps option for meshes with co-linear vertices. 
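Foundry doesn't say how Fix Gaps detects problem geometry, but the co-linearity test it relies on is standard vector math: three vertices are co-linear when the cross product of the two edge vectors they define is (near-)zero. A minimal sketch of that check (illustrative only, not Modo code):

```python
def is_collinear(a, b, c, eps=1e-9):
    # Three 3D points are co-linear when cross(b - a, c - a) has
    # (near-)zero magnitude; eps absorbs floating-point noise.
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    cross = [
        ab[1] * ac[2] - ab[2] * ac[1],
        ab[2] * ac[0] - ab[0] * ac[2],
        ab[0] * ac[1] - ab[1] * ac[0],
    ]
    return sum(x * x for x in cross) < eps
```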
You can find a complete list of workflow and UX improvements via the links below. Modo 17.0 is available for 64-bit Windows 10+, Rocky Linux 9, and macOS 12.0+. Foundry stopped selling new perpetual licences of Modo in 2021, though maintenance contracts are still available for users with existing perpetual licences. Individual subscriptions now cost $89/month or $719/year, up $18/month since the previous release. Studio subscriptions are priced on enquiry. https://learn.foundry.com/modo/17.0/content/release_notes/17/modo_17.0v1_releasenotes.html MoonRay 1.5 Open-sourced last year, along with Arras, DreamWorks’ in-house cloud rendering framework, MoonRay is a high-performance Monte Carlo ray tracer. It was designed with the aim of keeping “all the cores of all the machines busy all the time”, and has a hybrid GPU/CPU rendering mode with “100% output matching” to CPU rendering. MoonRay is capable of both stylized and photorealistic output, and has the key features you would expect of a VFX renderer, including AOVs/LPEs, deep output and Cryptomatte. It comes with a Hydra render delegate, hdMoonRay, making it possible to integrate MoonRay as a viewport renderer in DCC applications that support Hydra delegates, like Houdini and Katana. MoonRay 1.5 introduces support for the CY2023 specification of VFX Reference Platform, and for Rocky Linux 9, on which MoonRay can now be built, in addition to CentOS. According to the release notes, CY2023 support makes it possible to use hdMoonRay in Nuke 15, the current version of Foundry’s compositing software. The update also features the initial implementation of a new adaptive light sampling scheme. Other new features since the release of MoonRay 1.0 include a new ramp control for volumes, and a telemetry overlay system. The software also now supports GPU render denoising via Open Image Denoise 2, and XPU mode – hybrid CPU/GPU rendering – is now MoonRay’s default render mode. 
In addition, there is now a separate development branch of MoonRay with support for VR rendering via the PresenZ volumetric format. MoonRay is available under an open-source Apache 2.0 licence. The software can be compiled from source on Linux only. You can find a list of dependencies and build instructions in the online documentation. It requires an x86-64 CPU with support for AVX2, so it should run on any recent AMD or Intel CPU. GPU acceleration is based on CUDA and OptiX and requires an NVIDIA GPU. https://github.com/dreamworksanimation/openmoonray/releases/tag/openmoonray-1.5.0.0 Affinity 2.4 Those using RAW will notice a best-ever performance from the SerifLabs RAW processing engine, which has been updated to include more than 50 additional RAW formats. It now supports:
Apple iPhone 13, 14 (all types), 15 (all types)
Canon EOS R8
Nikon Z8
Panasonic DC-GH6
Leica Q3 and M11 Monochrom
Fujifilm GFX 100 II
Hasselblad CFV-100
OM Digital Solutions OM-1 Mark II
DJI Mavic 3 Pro (drone)
And many more! The addition of 32-bit HDR PNG 3rd Edition to existing True HDR support for Mac and Windows means Affinity Photo meets all the key requirements for high dynamic range on-screen graphics—a measure implemented following consultation with leading sports broadcasters in North America—making Affinity the first design software package to deliver this capability! It also enables Affinity Photo users to export images for HDR display in Google Chrome (with other browsers expected to follow). In this video, product expert James walks you through the process of importing and exporting HDR PNG files in Affinity Photo. All Affinity apps have alignment functionality that lets you easily align and distribute layers as well as unify their properties, such as scaling and rotation. You will now see three new choices to make all items in your current selection adopt the same width, height or rotation, which makes precision alignment even quicker and easier. 
James demonstrates how useful these new options can be in this video. The States Panel in Affinity Photo is now available in Affinity Designer and Publisher! This feature has a lot of practical uses as it allows you to quickly switch between different variations of a project or compare design choices by capturing the current visibility states of your layers across your document. You can also use queries based on various criteria to toggle layer visibility and make selections across multiple layers and artboards simultaneously. Product expert Katy explains how you can use Layer States to select, show and hide layers based on criteria such as layer type and name, including the use of Regular Expressions for more advanced filtering in this video. Affinity Designer now supports both DWG and DXF export, which is great news for architects and other designers! It means that outlines created in Affinity can now be easily exported for use in various CAD applications as well as utilities for things such as vinyl cutters, plotters and CNC tools. Blender 4.1 Gaea 2.0 Mocha Pro 2024 The major new feature in Mocha Pro 2024 is a rewritten version of the Camera Solve module that integrates the core solver from SynthEyes, which Boris FX acquired last year. The change should make it possible to combine SynthEyes’ robust camera solves with the flexibility of Mocha Pro’s planar tracking tools: one use case that Boris FX identifies is to use roto splines to exclude parts of a shot that could throw off a track, like occlusions. It is also possible to convert Mocha’s PowerMeshes to projected static 3D meshes in the scene. The solver has its own 3D viewer, making it possible to see the results of a solve directly inside Mocha without exporting, and its own navigation controls. Another key feature of the rewritten Camera Solve module is USD support. 
Users can now import 3D models from USD assets to visualize draft results overlaid on a scene; or export USD data to software that supports Pixar’s Universal Scene Description format, including 3ds Max, Blender, Cinema 4D, Houdini, Maya and Nuke. It is also possible to export the results of a 3D solve to SynthEyes for more detailed match-moving; to export 3D data in FBX format; and to export nulls to After Effects. Updates to the existing tracking and rotoscoping tools include new Linear and Constant keyframe options for stepped tracking, with the option to Skip or Step over frames. A new Extrapolate Track feature continues tracking when a track goes off-screen or behind another object, by extrapolating from previous frames. There are also a number of performance improvements, with Insert rendering now “up to 15x faster”, and Python scripts now running “up to 6x faster” in the Script Editor. Mocha Pro 2024 is available as a standalone application, and as Adobe, Avid and OFX plugins. The standalone is compatible with Windows 10+, RHEL/CentOS 7-9 Linux and macOS 10.15+. The plugins are compatible with After Effects and Premiere Pro CC 2014+, HitFilm Pro 2017+, Media Composer 8+, Nuke 8+, Flame 2020+, Fusion Studio 8+, Silhouette 2020+ and Vegas Pro 14-19. A new licence costs $695 for the Adobe, Avid or OFX plugins, $995 for all the plugins, or $1,495 for the standalone edition plus all of the plugins. https://borisfx.com/release-notes/mochapro-2024-release-notes/ KeyShot 2024 KeyShot is a standalone CPU and GPU-based ray tracing renderer with integration plugins for a range of popular DCC and CAD applications. It has an intuitive workflow, intended to enable non-specialist artists to create photorealistic renders of imported 3D models, and also includes simple technical animation capabilities. 
Although it is used by entertainment artists to create portfolio renders, having previously been the de facto third-party renderer for ZBrush, its key market is product visualization. KeyShot 2024.1 is a performance and workflow update, with only two new features listed in the release notes. The main one is the new built-in Image Sharpening effect shown in the video above, intended to reduce the need to post-process renders in image-editing software. Under the hood, KeyShot has been updated to use OptiX 8, the latest version of NVIDIA’s GPU ray tracing API, improving median render performance in GPU Mode by “around 12%”. Outside the core application, performance of the KeyShot Web Viewer, used to view online content created by the KeyShot Web add-on, has also been improved. KeyShot 2024.1 is available for Windows 10+ and macOS 11.7+. Integration plugins are available for a range of DCC and CAD tools, including 3ds Max, Blender, Cinema 4D and Maya. The software is available rental-only. KeyShot Pro subscriptions cost $1,188/year. KeyShot Web and network rendering are available via separate subscriptions. https://www.keyshot.com/blog/introducing-keyshot-2024/ Audio2Face Although Audio2Face already supports Reallusion characters and can stream the facial animation it generates to other apps, the new plugins greatly simplify workflow. The first plugin, CC Character Auto Setup, exports CC3+ characters from Character Creator or iClone to Audio2Face, “condensing the manual 18-step process into a single step”. The second plugin, Audio2Face (A2F) for iClone, transfers facial animation generated by Audio2Face back to iClone, and includes controls for tweaking or smoothing the animation. The workflow provides an alternative to iClone’s built-in AccuLips automated lip sync system. The Audio2Face for iClone plugins are compatible with iClone 8.4+, Audio2Face 2023.2+ and Character Creator 4.4+. They’re a free download. 
To use them, users also need the free Character Creator and iClone Omniverse Connectors. Character Creator and iClone are available for Windows 7+. New licences have MSRPs of $299 and $599, respectively. Audio2Face is available for Windows 10+, and is free as part of Omniverse. https://www.reallusion.com/iclone/nvidia-omniverse/Audio2Face.html Kiri Engine Kiri Engine adds support for the new 3D reconstruction technique 3D Gaussian Splatting. Like photogrammetry, it begins by generating a point cloud of a 3D object or scene from a set of source photos, but rather than converting the point cloud to a textured mesh, it converts it to gaussians, using machine learning to determine the correct color for each one. The result is a high-quality and potentially fast-rendering 3D representation of the object or scene being scanned. Although other 3D scanning apps like Polycam support the creation and viewing of 3D Gaussian Splats in their iOS editions – Polycam also has a free online splat creator and viewer – Kiri Innovations claims that Kiri Engine is the first Android app to do so. Although in Kiri Engine 3.0, it was only possible to generate HTML code to embed raw 3DGS data in a website, subsequent updates have made it possible to export the scan. The 3.3 and 3.4 updates made it possible to export the 3DGS data in PLY format. Although few DCC applications currently support 3DGS natively, there are free add-ons for importing and rendering 3DGS data in Unity and, more recently, Unreal Engine. For DCC applications like Blender, the current 3.7 update adds experimental support for generating a mesh from the 3DGS data and exporting it in OBJ format. New cropping tools also make it possible to remove unwanted gaussians from the scan. Kiri Engine 3.7 is available for Android 7.0+ and iOS 15.0+. The new 3DGS functionality is only available to users with paid Pro accounts. The app itself is free to download. 
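The PLY files mentioned above for 3DGS export are standard PLY containers, typically storing one 'vertex' element per gaussian with position, scale, rotation, opacity and spherical-harmonic colour properties; exact property names vary between tools. A file's header can be inspected with a few lines of Python (a generic sketch, not Kiri-specific):

```python
def parse_ply_header(data: bytes):
    # Read the ASCII header of a PLY file and return a mapping of
    # {element_name: (count, [property_names])}. The binary payload
    # after "end_header" is left untouched.
    header, _, _ = data.partition(b"end_header\n")
    elements, current = {}, None
    for line in header.decode("ascii").splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "element":
            current = parts[1]
            elements[current] = (int(parts[2]), [])
        elif parts[0] == "property" and current:
            elements[current][1].append(parts[-1])
    return elements
```

For a 3DGS scan, the 'vertex' count tells you how many gaussians the file contains, which is a quick sanity check before importing it into another tool.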
Free Basic accounts include photogrammetry, Lidar scanning and Object Capture, and can export three 3D scans from the cloud per week. Pro accounts cost $14.99/month or $59.99/year, make it possible to export an unlimited number of scans from the cloud, and unlock advanced features. https://play.google.com/store/apps/details?id=com.kiriengine.app https://apps.apple.com/us/app/kiri-engine-3d-scanner/id1577127142 Particle Illusion Pro Part of Boris FX’s Continuum suite of effects plugins for compositing and video editing software, Particle Illusion is a GPU-accelerated particle generator for motion graphics and VFX work. It lets users create 3D particle effects using a node-based workflow to combine emitters, forces and deflector objects, with the option to apply fluid dynamics, or generate particle trails. In 2020, Boris FX released a free feature-limited standalone edition of the software, which has received regular updates ever since. Particle Illusion Pro takes that free edition and removes the two main restrictions: that you couldn’t composite particles directly into source footage, and that it didn’t support tracking data. The Pro edition can import background video in standard file formats, and can import tracking data for emitters or cameras from Boris FX’s Mocha and SynthEyes software. In addition, the current release, Particle Illusion Pro 2024, includes the features added to the integrated version of Particle Illusion in Continuum 2024. Key changes include the option to create particle sprites using generative AI, and to make particle lines true 3D objects. Particle Illusion Pro can still be run for free indefinitely in trial mode, which disables the import of background video and tracking data. Like the orignal free edition, output is not watermarked, although trial mode is “intended for non-commercial or personal use only”. 
Boris FX told us that it had introduced the Pro edition because “we received many requests from customers … who were looking to use Particle Illusion on commercial/paid projects and needed full access to our technical support”. Particle Illusion Pro 2024 is compatible with Windows 10+ and macOS 10.15+. The software is available rental-only. Subscriptions cost $25/month or $195/year. Particle Illusion is also available as part of Continuum, or the Continuum Particles Unit. Perpetual licenses of Continuum cost from $695 to $1,995, depending on the host applications. Subscriptions cost from $25/month to $87/month or $195/year to $695/year. A perpetual license of the Continuum Particles Unit costs $399. https://borisfx.com/release-notes/particle-illusion-pro-2024-v17-0-3-release-notes/ Tooll 3.6 TOOLL3 is open-source software for creating real-time motion graphics. We are targeting the sweet spot between real-time rendering, graph-based procedural content generation and linear keyframe animation and editing. This combination allows artists to build audio-reactive VJ content, use advanced interfaces for exploring parameters, or combine keyframe animation with automation. Technical artists can also dive deeper and use the tool for advanced development of fragment or compute shaders, or to add input from MIDI controllers, sensors, or sources like OSC or Spout. https://github.com/tooll3/t3/releases https://github.com/orgs/tooll3/projects/1/views/2 Adobe Substance 3D Generative AI continues to advance at a rapid pace, with text-to-image generators now able to handle text and video, to an extent. One of the next frontiers is 3D work, and Adobe has just announced a potentially game-changing move with the first addition of tools driven by Firefly AI in Adobe Substance 3D, its suite of 3D programs. Firefly, Adobe's own suite of generative AI tools, is already powering text-to-image features in the likes of Photoshop and Illustrator.
Now it's coming to Substance 3D Sampler and Stager in a move that Adobe says will boost efficiency and creativity in 3D workflows for industrial designers, game developers, VFX professionals and other content creators. Here's what's included. The integration of Firefly AI involves two Substance apps for now: Substance 3D Sampler and Stager. In the former, Adobe's added a new Text-to-Texture tool, which allows creators to use written prompts to create photorealistic textures that can then be tweaked and applied to 3D objects. Meanwhile, Substance 3D Stager 3.0 (beta) includes a Firefly Generative Background feature. This will allow users to create detailed 3D scene backgrounds from text descriptions, with lighting automatically adjusting to the entire scene. Adobe says that the new capabilities accelerate the creative review process, and save considerable time. It says that for industrial designers and professionals in the gaming and VFX industries, the updates mean quicker ideation, more creative freedom, and the ability to generate high-quality, realistic textures and environments at a fraction of the usual time and cost. Marketing professionals and content creators can also use the new tools to produce high-fidelity visuals and animations for brand presentations. Sebastien Deguy, vice president, 3D & Immersive, said in the press release: "Adobe has always looked for new and innovative ways to put cutting-edge creative tools in the hands of designers and other artists. By integrating Firefly's generative AI capabilities into Substance 3D, we're not just streamlining the creative process – we're unlocking new realms of creative possibility with new generative workflows designed to unlock human imagination, not replace it." Substance 3D Modeler 1.8 Substance 3D Modeler 1.8 overhauls the software’s camera system, adding a new Viewport panel, accessible both while modeling and in render mode. 
From it, users can add new cameras to a scene, switch between cameras, and in desktop mode, toggle between orthographic and perspective views. It is also possible to use the Viewport and Node properties panels to adjust camera properties, including FOV, aspect ratio and depth of field. When working in VR, there is a new smooth camera type, intended for recording demos. Several key modeling tools have been updated to improve workflow and performance, including Buildup, Crease, Flatten and Inflate. Inflate now supports alpha textures, and Flatten gets a toggle to lock the flattening plane in planar or continuous mode. Other workflow improvements include the option to rename materials within Substance 3D Modeler, and updates to keyboard shortcuts. Substance 3D Modeler 1.8 is compatible with Windows 10+, and these VR headsets. It is available via Substance 3D Collection subscriptions, which cost $49.99/month or $549.88/year. Perpetual licences are available via Steam, and cost $149.99. https://helpx.adobe.com/substance-3d-modeler/release-notes/v1-8-release-notes.html Autodesk acquires PIX Autodesk has completed its acquisition of production management solution PIX from previous developer X2X. The financial terms of the deal were not disclosed. The company now aims to connect PIX with Flow, its ‘industry cloud’ for media and entertainment, to strengthen its capabilities to ingest and manage on-set data. The deal, initially announced last month, will see Autodesk acquire PIX from X2X. Described as “the industry-standard tool for securely sharing dailies and cuts”, PIX is a production management and virtual collaboration platform. As well as enabling filmmakers to upload the digital assets for a production to a shared location for review, PIX is intended to enable creatives working off set to influence the work of the on-set crew.
Autodesk now aims to connect PIX with Flow, its new cloud-based platform for connecting “people, workflows, and data across the entire production lifecycle”. According to Autodesk, “Much of the data … that our customers rely on is generated on set, yet because [on-set work and post-production have] historically been disconnected, it often doesn’t make it all the way downstream … causing considerable inefficiency.” “The addition of PIX to Autodesk will make it easier to share data captured on set with studio executives and production teams … linking a previously disparate workflow. “Connecting PIX’s production management solution to Flow will ensure a better flow of data to all stakeholders and help reduce inefficiency.” PIX will become the latest Autodesk acquisition to be connected to Flow, the firm having previously announced that it planned to integrate Moxion, the digital dailies platform it acquired in 2022. https://adsknews.autodesk.com/en/news/autodesk-closes-pix-acquisition/ NVIDIA’s Omniverse announcement Announced in 2022, Omniverse Cloud is a set of cloud services built around Omniverse, NVIDIA’s design collaboration platform. Initial offerings included the option to stream the viewport output of native Omniverse apps like USD Composer and USD Presenter from the cloud to users’ devices. The new APIs will enable developers to integrate core Omniverse technologies into their own products, particularly for manipulating data in USD format, around which Omniverse is built – for example, to use Omniverse’s RTX Renderer to visualize USD assets. There are five new Omniverse Cloud APIs: ● USD Render — generates fully ray-traced RTX renders of OpenUSD data. ● USD Write — lets users modify and interact with OpenUSD data. ● USD Query — enables scene queries and interactive scenarios. ● USD Notify — tracks USD changes and provides updates. ● Omniverse Channel — connects users and tools to enable collaboration across scenes.
Although the APIs could be used in entertainment or visualization products, all of the early adopters cited by NVIDIA come from other industry sectors. They include Dassault Systèmes, which plans to use the APIs in its 3DEXCITE applications for industrial content creation, and construction and geospatial tech firm Trimble, which plans to create “interactive Omniverse RTX viewers” with Trimble model data. Other early use cases include Product Lifecycle Management and industrial digital twins. In addition, NVIDIA showed ray traced visuals being streamed from an Omniverse-developed app to Apple’s high-end Vision Pro VR headset in real time at “full fidelity”. The prerecorded demo showed a designer wearing the Vision Pro receiving visuals from a car configurator developed by CGI firm Katana Studio on the Omniverse platform. The process used a novel hybrid rendering workflow, combining local rendering on the Vision Pro with cloud rendering on NVIDIA’s Graphics Delivery Network. NVIDIA pitches it as an example of how Omniverse could be used with the Vision Pro to “deliver high-fidelity visuals without compromising … massive, engineering fidelity datasets”. Omniverse Cloud APIs will be offered “later this year” on Microsoft Azure as self-hosted APIs on NVIDIA A10 GPUs or as managed services deployed on NVIDIA OVX systems. https://nvidianews.nvidia.com/news/omniverse-cloud-apis-industrial-digital-twin Airen 4D An AI render-enhancing and texture-generation plugin for C4D. Pricing starts at $199.99 when it is released on 26 March. https://merk.gumroad.com/l/airen4d_win?layout=profile LATTE3D LATTE3D (Large-scale Amortized Text-To-Enhanced3D Synthesis) is NVIDIA’s latest text-to-3D AI model: its third in a year, following on from Magic3D and ATT3D. Each has improved on the previous model, increasing training speed, then output quality.
With ATT3D, NVIDIA began training on multiple text prompts as well as multiple 3D assets, to account for the different ways in which a user might describe the object to recreate. The approach speeds up training over training on prompts individually, as was the case with Magic3D. LATTE3D also uses multiple prompts – for the work, NVIDIA generated a set of 100,000 possible prompts using ChatGPT – but improves the visual quality of the assets generated. If you compare the demo assets generated by ATT3D and LATTE3D, the output from LATTE3D is noticeably crisper and more detailed. They’re still relatively low-resolution, but they’re getting to the point at which they could be used to block out a scene, or even be used as background assets. LATTE3D is primarily a proof of concept: NVIDIA hasn’t released the source code, and the model was only trained for two specific types of asset: animals and everyday objects. What is more significant is what it shows about the speed at which text-to-3D is evolving – and by extension, how soon usable publicly available text-to-3D services might arrive. At NVIDIA’s GTC 2024 conference, Sanja Fidler, the firm’s Vice President of AI Research, admitted that quality is “not yet near what an artist would create”, but pointed out how far things have come since Google announced its pioneering DreamFusion model in late 2022. “A year ago, it took an hour for AI models to generate 3D visuals of this quality — and the current state of the art is now around 10 to 12 seconds,” she said. “We can now produce results an order of magnitude faster, putting near-real-time text-to-3D generation within reach for creators across industries.” https://blogs.nvidia.com/blog/latte-3d-generative-ai-research/ FaceBuilder 2024.1 Originally released for Nuke in 2018, and later ported to Blender, FaceBuilder provides an easy way to create textured models of actors’ heads from photographs.
The plugin automatically generates 3D geometry matching the source images, with users able to adjust the results by dragging ‘pins’ around in the 3D viewport. It works even from a single image, or photos of faces in non-neutral expressions. For facial animation, the plugin generates 51 built-in ARKit-compatible FACS blendshapes. FaceBuilder 2024.1 for Blender revamps the plugin’s UI and core workflow to streamline the process of creating a head. Key changes to the interface include grouping all of the mesh alignment controls in one place, with new buttons for rotating the head in the viewport to speed up work. Face detection and mesh alignment are now automatic for single-image-to-3D workflow, and creating textures for a 3D head from blended views now “requires just one click”. You can see the current workflow in the new official tutorial video, embedded above. FaceBuilder 2024.1 is available for Blender 2.80+ on Windows, Linux and macOS. The software is rental-only, with subscriptions also providing access to FaceBuilder for Nuke. Individual Freelancer subscriptions cost $18/month or $179/year; floating Studio subscriptions cost $499/year. https://keentools.io/version-history Ragnar Ragnar lets users control 3ds Max by typing in plain-text descriptions of what they want the software to do into the plugin’s Commands field. The plugin then generates MAXScript based on the text prompt, and executes the script to modify the scene. The output can be further refined by entering a new text prompt. The process is context-sensitive, so the scripts generated by subsequent prompts take previous prompts into account. Chains of commands are grouped into broader ‘strategies’, which are linked to the user’s ID key for Ephere’s server, so they can be retrieved in later sessions, or on other computers. Ragnar is compatible with 3ds Max 2018+ running on Windows 10+. The plugin is a free download. To activate it, you then need to request a license key from Ephere’s Discord server.
According to the product webpage, usage is priced on a pay-per-prompt basis. https://ephere.com/plugins/autodesk/max/ragnar/docs/1/Using_Ragnar.html https://discord.com/channels/707238078540021850/1195358039159623721 SayMotion Founded in 2014 by games industry veteran Kevin He, DeepMotion is one of the pioneers in the field of AI motion capture, and paved the way for other recent browser-based mocap tools. With SayMotion, it has moved into the world of generative AI, letting users create animation clips by entering text descriptions of the movement they want to generate. Prompts follow a three-part structure: subject, action and detail. Subject describes the nature of the character being animated (for example, ‘woman’, ‘child’ or ‘swimmer’), and action describes… well, the action being performed. Detail modifies the animation, by specifying the direction or orientation in which it should take place, and the style or emotion with which the character should carry it out. When chained together, they can generate quite complex sequences of actions. One example from the online documentation uses the prompt, ‘A person walks clockwise in an angry manner, and then reaches out with their left arm to grab the left arm of a chair, lowering themselves heavily into the seat.’ The initial animation can also be refined using SayMotion’s ‘inpainting’ system, which lets users extend, modify or blend animations with additional text prompts. The example in the starter guide shows the motion of bending down to pick up an object being layered onto an animation of a character walking in a circle. The system also supports some of the advanced options from Animate 3D, DeepMotion’s AI mocap service, including motion smoothing and foot locking. As well as DeepMotion’s readymade characters, or those created with its built-in avatar-generation tools, users can upload their own custom 3D characters to animate.
Those from stock model sites like Mixamo, Sketchfab and TurboSquid should work, so long as they have standard humanoid bodily proportions. The animated character can then be exported in FBX, GLB or BVH formats, making it possible to use the animation in DCC applications like Blender or Maya and game engines like Unity and Unreal Engine. It is also possible to export video of the animation in MP4 format. SayMotion is currently limited to generating full-body motion for human characters, although DeepMotion is considering adding support for facial and hand animation. The company also plans to add support for video prompts, generating a new unique animation based on SayMotion’s interpretation of a source video. SayMotion is currently in open beta. It should work with any standard desktop web browser. Support for mobile browsers is planned. During the beta period, all users get 2,000 credits per month. Generating an animation or MP4, downloading an animation, or using inpainting all consume a credit. Animation created with beta accounts is licensed for use in commercial projects. DeepMotion hasn’t announced a release date or final pricing for SayMotion. https://www.deepmotion.com/saymotion https://www.deepmotion.com/post/guide-to-saymotion--open-beta-unleash-the-power-of-text-to-3d-animation Lumion 2024.0 Lumion 2024.0 extends the capabilities of the hardware-accelerated hybrid ray tracing/rasterization render engine introduced in Lumion 2023.0. Key changes include a new temporal denoiser in Preview and Movie mode, making previews much less grainy, and support for ray traced subsurface scattering. Ray tracing can now be used with a lot more assets from Lumion’s asset library, including glass, translucent materials, and over 2,200 ‘nature items’ like trees. It’s also much faster, thanks to a new Multiple Importance Sampling (MIS) algorithm: according to Lumion, ray traced video rendering is now “5x faster”.
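Multiple importance sampling, the technique Lumion credits for the speed-up, weights samples from several sampling strategies so that each is trusted where its pdf best matches the integrand. A minimal sketch of the textbook balance heuristic (generic MIS, not Lumion's implementation), estimating a simple 1D integral with two strategies:

```python
import random

def balance_weight(p_this, p_other):
    """Balance heuristic: w_i(x) = p_i(x) / (p_i(x) + p_j(x))."""
    return p_this / (p_this + p_other)

def mis_estimate(f, n=200_000, seed=1):
    """Estimate the integral of f over [0, 1] using two sampling strategies,
    combined with balance-heuristic weights."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Strategy 1: uniform on [0, 1], pdf p1(x) = 1
        x1 = rng.random()
        total += balance_weight(1.0, 2.0 * x1) * f(x1) / 1.0
        # Strategy 2: linear pdf p2(x) = 2x, sampled via inverse CDF x = sqrt(u)
        x2 = rng.random() ** 0.5
        total += balance_weight(2.0 * x2, 1.0) * f(x2) / (2.0 * x2)
    return total / n

est = mis_estimate(lambda x: x * x)
print(est)  # the true integral of x^2 over [0, 1] is 1/3
```

In a renderer, the two strategies would typically be light sampling and BSDF sampling; the weighting keeps variance low wherever either pdf is a poor match for the integrand.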
Other new features in Lumion 2024.0 include a landscape tiling toggle in the material editor to hide visible repeats in environment textures. Workflow improvements include a new vertical sidebar to free up screen space; grid overlays to help with scene composition; and eight new preset render styles. Pipeline changes include support for importing assets in glTF format, and support for batch file importing. There are also 300 new assets in the accompanying library, including 100 new plants and trees, 45 static characters, and 25 parallax interiors for buildings. Lumion 2024.0 is available for Windows 10+. The software is rental-only. One-year subscriptions cost $749/year for the standard edition; $1,499/year for the Pro edition. https://lumion.com/product/buy#compare-table https://support.lumion.com/hc/en-us/articles/13059592095644-Lumion-2024-0-Release-Notes Substance 3D Stager 3.0 Launched in 2021, Substance 3D Stager is a shot layout and rendering application. Aimed at designers as well as specialist 3D artists, it is intended to provide an intuitive workflow for creating photorealistic brand visualisations and product mockups. Users can lay out scenes using physics-based tools to place assets, and pick from libraries of readymade materials and ‘environment stages’ mimicking real-world studio lighting. The main new feature in Substance 3D Stager 3.0 is Generative Background, a new system for adding a background to a scene without the need to find a suitable stock image. Powered by Firefly, Adobe’s suite of generative AI tools, it enables users to create the background by entering a simple text description. Users can adjust the output by selecting presets for color, tone, lighting and composition, with Stager generating four variant images from which to pick. The software’s Match Image system then automatically adjusts the perspective to that of the camera, and matches the environment lighting in the scene to the image. 
Stager automatically attaches Adobe’s Content Credentials to assets generated using Firefly, indicating in the metadata that they are AI-generated. The update also adds an interactive denoiser when editing a scene in Design mode, making it possible to see high-quality viewport previews more quickly. The real-time rendering engine also now handles translucency better, displaying refractions when rendering objects like glass, plastics and liquids. Other new features include tools for creating 3D text and physical lights. Updates to existing features include a redesigned Select Tool with a “screen-oriented bounding box design for easier compositing at lower angles”. It is also now possible to group cameras, and to save custom scenes as templates. Substance 3D Stager 3.0 is in beta. Adobe hasn’t announced a final release date. The current stable release, Substance 3D Stager 2.1, is available for Windows 10+ and macOS 11.0+. The software is now rental-only, Adobe having discontinued the old Steam perpetual license at the start of the year. It is available as part of Adobe’s Substance 3D Collection subscriptions, which cost $49.99/month or $549.88/year for individuals; $1,198.88/year for studios. https://helpx.adobe.com/substance-3d-stager/release-notes/version-3-0.html Cascadeur 2024.1 Cascadeur 2024.1 introduces an interesting new feature: animation unbaking. It converts imported animation in which each frame is a keyframe, such as unfiltered mocap data, into a set of keyframes and interpolations, making it much easier to edit. Users can process the data automatically, or place keyframes and choose interpolation types manually for finer control. The unbaked animation can then be edited using Cascadeur’s native tools, including using AutoPhysics to smooth unwanted noise, or to add realistic secondary motion. 
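At its core, the unbaking described above is keyframe reduction: keep only the keys needed to reproduce the dense per-frame curve within a tolerance. A minimal 1D sketch using recursive splitting on linear-interpolation error (illustrative only; Cascadeur's algorithm is more sophisticated and also chooses interpolation types):

```python
def unbake(samples, tol=0.01):
    """Reduce per-frame samples [(frame, value), ...] to keyframes such that
    linear interpolation between kept keys stays within tol of every sample."""
    def max_error(a, b):
        # Find the sample between indices a and b that deviates most
        # from the straight line joining samples[a] and samples[b].
        (t0, v0), (t1, v1) = samples[a], samples[b]
        worst, idx = 0.0, a
        for i in range(a + 1, b):
            t, v = samples[i]
            lerp = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
            if abs(v - lerp) > worst:
                worst, idx = abs(v - lerp), i
        return worst, idx

    def simplify(a, b):
        err, idx = max_error(a, b)
        if err <= tol:
            return [a]          # the segment is flat enough: keep only its start
        return simplify(a, idx) + simplify(idx, b)  # split at the worst sample

    keep = simplify(0, len(samples) - 1) + [len(samples) - 1]
    return [samples[i] for i in keep]

# A "baked" curve: one key per frame of a parabola-like motion.
baked = [(f, (f / 10.0) ** 2) for f in range(101)]
keys = unbake(baked, tol=0.05)
print(len(baked), "->", len(keys))  # far fewer keys, same curve within tolerance
```

Choosing spline rather than linear interpolation between the kept keys, as Cascadeur allows, reduces the key count further for smooth motion.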
Cascadeur 2024.1 also adds support for animation retargeting to the software, letting users transfer animation from one character to another with different bodily proportions. According to Nekki’s blog post, the process is a simple copy-paste operation, and works with any humanoid characters. Other changes include an experimental auto-interpolation system intended to generate smoother animations. In addition, AutoPhysics receives a “massive upgrade”, enabling it to be used in animations in which characters interact with the environment: for example, climbing stairs. The workflow can be used with moving as well as static objects: the video above shows a character on a moving platform, with realistic inertia as the platform comes to rest. AutoPosing gets new controllers for more detailed posing, including finger posing, and support for weapons and other props. There are also a number of other workflow improvements, including the option to detach viewports: you can find a full list via the links below. Nekki has also “completely overhauled” its licensing model for Cascadeur. The software was previously free to individuals or teams with revenue under $100,000/year, although animation export was capped at 300 frames per scene and 120 joints per scene. The free edition is now available to anyone, but is no longer licensed for commercial use, and saves animation in its own .casc file format, making it impossible to use in other DCC apps. Users earning under $100,000/year can get the new $99/year Indie subscription, which is licensed for commercial use, but lacks advanced features like retargeting. The cost of Pro subscriptions rises from $300/year to $399/year ($49/month). Finally, the old $1,000/year Business subscription has been superseded by a Teams plan, pricing for which starts at $1,596/year for four users. All three paid subscriptions – Indie, Pro and Teams – are rent-to-own plans, with users qualifying for a perpetual license after one year. 
Nekki also plans to introduce a $39/year Educational edition for students and teachers in “a few weeks”, which will have the same features as the Pro edition. Cascadeur 2024.1 is available for Windows 10+, Ubuntu 20.04+ Linux and macOS 13.3+. https://cascadeur.com/help/category/218 Blender Extensions Canva Acquires Serif If it already seemed that Canva was out to challenge Adobe's dominance over design software, today's news leaves no doubt. The company is buying Serif's Affinity suite, a major subscription-free competitor to some of the main Adobe Creative Cloud apps. The Affinity suite is by no means as broad as Creative Cloud, but it comprises alternatives to three of Adobe's main programs: Photoshop, Illustrator and InDesign. Unlike Adobe's software, Affinity Photo 2, Affinity Designer 2 and Affinity Publisher 2 are available as one-off purchases rather than subscriptions, making them much more affordable in the long term. As a web-based design platform, Australia-based Canva has been growing rapidly in recent years. It's been adding a host of generative AI features and expanding its tools way beyond its core logo maker and original focus on social media. It's also been trying to broaden its appeal beyond amateur designers and small businesses to include creative professionals, a move that's cemented by today's announcement. Buying Affinity Designer, Photo, and Publisher from Serif means it now adds more fully-featured non-web creative apps to its portfolio. The three programs are available for Windows, Mac and iPad and offer many similar features to Adobe Illustrator, Photoshop and InDesign. Recently, the integration of Adobe Firefly AI-powered tools such as Generative Fill in Photoshop has given Creative Cloud apps an additional edge over Affinity's options.
However, Canva has text-to-image AI tools of its own, so it will be interesting to see if its acquisition of Affinity will lead to such features being added to the programs to help them compete with Adobe's offerings. Serif launched in 1987 and built up a reputation for quality and reliability based on an award-winning range of software for PC – low-cost alternatives to high-end publishing and graphics packages. The focus of development then switched to a next-generation suite of lean, super-fast apps for creative professionals using Mac, Windows and iPad. Affinity Designer, the first Affinity app, was released in 2014. Affinity Photo followed in 2015 and the launch of Affinity Publisher in 2019 meant Serif boasted a full creative suite of applications covering photo editing, graphic design and desktop publishing. Affinity Version 2, launched in November 2022, represents a complete reimagining of the Affinity suite with new features and enhancements, plus a redesigned UI, for a fluid and powerful working experience. The launch at the same time of Affinity Publisher 2 for iPad means the entire Affinity suite is integrated across Mac, PC and iPad. Affinity products have won some of the biggest awards in the industry, gained plaudits from reviewers, and been adopted by millions of users worldwide. Serif employs around 90 people at its HQ in Nottingham, UK. https://affinity.serif.com/en-gb/press/newsroom/canva-statement/ https://www.canva.com/newsroom/news/affinity/?clickId=UXDWu3XPUxyPWKtWAsWCbSj3UkHUr40lHwXmxI0&irgwc=1 Babylon.js 7.0 Microsoft continues to support the Babylon.js development tools for the purpose of making complex web-based games. Today, the company officially announced the next version of the tools, Babylon.js 7.0, which includes quite a few new features and improvements. Microsoft announced that one of the biggest additions is support for procedural geometry. 
Microsoft calls its version "Node Geometry," and it should help create complex game worlds without the need for creating huge files for a game's 3D art. Microsoft says this feature allows a local PC to make that content. People who try to download and play a web-based game with Node Geometry support can just "download a few KBs worth of Node Geometry data and allow their own machine to create the geometry." Obviously, this means faster loading of a complex web-based game along with improved overall performance. Another feature in Babylon.js 7.0 is already a common feature in standard PC and console games. The new version adds support for ragdoll physics for body animations. Microsoft says this feature will let "any skeletal rigged asset to collapse and flop around with limp lifelessness at the push of a button." Yet another improvement in the tools should allow for better lighting and shadow effects in web-based games. Babylon.js 7.0 now supports Global Illumination for web games. https://doc.babylonjs.com/whats-new After Effects 24.3 After Effects 24.3 sees per-character styling, introduced in beta builds of the software last year, move into the stable release. Users can now vary the font, size, color and styling of text layers on a per-character basis, via scripting, using the TextDocument DOM’s new CharacterRange and ParagraphRange objects. You can find more details, plus a link to an example script, in the original beta announcement. Other changes include a new paragraphCount attribute, which returns the number of paragraphs in a TextDocument. Outside of text styling, there are no other new features in the release, although there are a number of bugfixes. After Effects 24.3 is available for Windows 10+ and macOS 12.0+ on a rental-only basis. Single-app After Effects subscriptions cost $34.49/month or $263.88/year. In the online documentation, the update is also referred to as the March 2024 release or After Effects 2024.3.
https://helpx.adobe.com/after-effects/using/whats-new/2024-3.html 3ds Max 2025 3ds Max’s rigging and animation toolsets don’t get any major new features, but there are fixes for “long-standing issues” in CAT, 3ds Max’s Character Animation Toolkit plugin. In particular, CAT should be more stable when loading or saving data, deleting CAT objects or using the Layer Manager, and copy/pasting layers should work correctly with Extraction mode. The OpenColorIO (OCIO) color management system introduced in 3ds Max 2024 as a tech preview now becomes the default mode for color management in new scenes. In addition, the VertexPaint tool is now color-managed, and Baking to Texture lets users choose an output color space. Workflow improvements include a new Menu Editor for customizing 3ds Max’s menus, including quad menus. With it, users can add, remove, reorder or rename menu items, or create new separators and sub-menus; and custom configurations will be compatible with future versions of 3ds Max. The global search system has also been overhauled, with an expanded list of commands appearing as the user types, as shown above. Search results now show the last five commands used, making it easier to repeat actions; and it is now possible to double-click in the search results to launch actions. The search bar can also be resized, or made into a dockable window. Outside the core application, 3ds Max’s USD plugin has been updated, with USD for 3ds Max 0.7 making it possible to move USD prims by selecting them directly in the viewport. Users can also now import USD camera and light animations, and import USD blendshapes as Morphers, blendshape export having been added in USD for 3ds Max 0.5 last year. 3ds Max’s Multi/Sub-Object Material is now supported when exporting USD files. 3ds Max’s integration plugin for Autodesk’s Arnold renderer has also been updated, with MAXtoA 5.7.0.0 supporting the new features from Arnold 7.2.5 and 7.3. 
It’s a sizeable change, rewriting a “large part” of Arnold’s GPU render engine to use OptiX 8, making start-up “significantly” faster, and improving performance scaling on multiple GPUs. The GPU engine also now supports multiple render sessions. The update also introduces support for dithered samples in progressive and adaptive rendering, generating “nicer noise distributions” at lower AA sample counts. Other changes include support for global light sampling in volumes, support for toon light group AOVs, and a new Overlay imager for printing text over rendered images. 3ds Max 2025 is compatible with Windows 10+. It is rental-only. Subscriptions cost $235/month or $1,875/year. In many countries, artists earning under $100,000/year and working on projects valued at under $100,000/year qualify for Indie subscriptions, which now cost $305/year. https://makeanything.autodesk.com/3dsmax Maya 2025 and Maya Creative 2025 The biggest change to the 3D modeling toolset in Maya 2025 is that the Smart Extrude system has been ported in from sibling application 3ds Max. First introduced in 3ds Max 2021.2 and updated steadily over the next three years, Smart Extrude is intended to provide a more intuitive way to extrude geometry than existing tools. It performs a combination of extrusion, cutting, merging and unioning as a single operation, making it possible to block out complex shapes quickly. Users can extrude faces interactively in the viewport, with Maya automatically rebuilding and stitching faces that have been cut through, reducing the need for manual clean-up. Maya’s existing modeling tools have also been updated, with the Bevel node now able to filter input edges. Rather than applying a bevel operation to an entire mesh, the Poly Bevel Filter makes it possible to apply it only to selected edges, hard edges, or edges above an angle threshold.
When applying a bevel to a mesh that was created as a result of a Boolean operation, by default, the bevel now only affects the edges created at the boolean intersection. If the Boolean input meshes are modified, the bevel updates automatically, as shown above. In addition, Extrude Edge now automatically creates UVs for the newly created edges or faces. The new UVs are offset from the original UVs in UV space, requiring less manual clean-up. Character riggers get a new Deformation widget in the Attribute Editor, for viewing the deformers and topology modifiers affecting an object without having to use the Node Graph. It provides quick access to basic information about each deformer, like vertex counts and component types, and makes it possible to reorder, disable or reactivate deformers. Updates to existing rigging features include the option to use the Proximity Wrap deformer as a proxNet deformer, to make a deforming driver geometry apply its influence relative to another deformed version of the driver, rather than its original undeformed shape. The workflow is probably easier to understand if you look at the example provided in the online documentation, which shows Proximity Wrap deformers being used to rig an arm. The Bake Deformer Tool – used for baking down complex rigs for export to game engines and other apps with limited support for deformers – gets another update, with the new Range of Motion option making it possible to sample poses only from a specific point in an animation. Other changes include the option to adjust the text size of joint labels, and to automatically orient the secondary axis of a joint from the Orient Joint Options and Joint Tool settings. Animators get a new Motion Trail Editor, accessible from the Visualize menu. It brings together settings that were previously scattered across several different windows in the Maya interface, including the Outliner, Channel Box and Attribute Editor. 
Maya’s existing animation tools have also been updated, with the Dope Sheet getting a significant overhaul. The update gives it a “cleaner, better organized interface”, more like the Graph Editor, with new visual indicators for keyframe properties, and customizable colors to identify sets of keys. Workflow for interacting with keyframes has been improved, including a new Ripple Edit function for moving and scaling keys. The Graph Editor gets new keyboard shortcuts for curve sculpting. Outside the core application, LookdevX, the new toolset for creating USD shading graphs introduced in Maya 2024, gets another update, with LookdevX 1.3 adding support for MaterialX. Users can now use USD and MaterialX shading graphs simultaneously within the same Maya session, and can assign MaterialX materials directly to Maya geometry. Maya’s USD support has also been further extended, with USD for Maya 0.27, the latest version of the Universal Scene Description plugin, making it possible to bulk load and unload prims. It is also now possible to move, rotate and scale prims directly in the viewport using Maya’s Universal Manipulator. There are also a number of smaller improvements and updates to Hydra for Maya. You can find a complete list of changes in the online release notes. Maya’s integration plugin for Autodesk’s Arnold renderer gets a significant update, with MtoA 5.4.0 adding support for MaterialX shader networks in LookdevX. Other changes include support for dithered samples in progressive and adaptive rendering, generating “nicer noise distributions” at lower AA sample counts. A “large part” of Arnold’s GPU render engine has also recently been rewritten using OptiX 8, the current version of NVIDIA’s GPU ray tracing framework, making start-up “significantly” faster, and improving performance scaling on multiple GPUs. The GPU engine also now supports multiple render sessions. 
Other changes include support for global light sampling in volumes, support for toon light group AOVs, and a new Overlay imager for printing text over rendered images. Maya’s Bifrost extension also gets a sizeable update, with Bifrost 2.9.0.0 adding the BOSS (Bifrost Ocean Simulation System) features from the older Bifrost Fluids toolset. Just like the original versions, they can be used to simulate wind-driven spectral waves, complete with foam, but can also be controlled via the underlying Bifrost graph. In addition, a new points_to_liquid_surface compound makes it possible to mesh point caches from Bifrost Fluids simulations, as well as sources such as MPM simulations in the graph. Maya’s Substance plugin, for editing procedural materials in Adobe’s Substance 3D format, has also been updated, with Substance 2.5.0 using the GPU by default, improving performance. In addition, the Maya Bonus Tools, Autodesk’s free collection of experimental Maya tools and scripts, is now included in the Maya installer, rather than being a separate download. Autodesk has also released Maya Creative 2025, the corresponding update to the cut-down edition of Maya aimed at smaller studios, and available on a pay-as-you-go basis. It includes most of the new features from Maya 2025, with the exception of the Bifrost update. Maya 2025 is available for Windows 10+, RHEL and Rocky Linux 8.7/9.3, and macOS 12.0+. The software is rental-only. Subscriptions cost $235/month or $1,875/year. In many countries, artists earning under $100,000/year and working on projects valued at under $100,000/year qualify for Maya Indie subscriptions, now priced at $305/year. Maya Creative is available pay-as-you-go, with prices starting at $3/day, and a minimum spend of $300/year. 
https://help.autodesk.com/view/MAYAUL/2025/ENU/?guid=GUID-96FC4776-D3ED-4746-BF76-75093E165F4F Arnold 7.3 Arnold 7.3 is Autodesk’s major annual update to the renderer, giving the firm a chance to make changes under the hood that break backwards compatibility. The major change in 7.3.0 is support for dithered sampling in progressive and adaptive renders, resulting in “much nicer” noise distributions at low AA sample values, as shown above. Support for the MaterialX format for rich material and look dev data has been extended, making it possible to render node graphs mixing Arnold nodes with MaterialX standard library nodes. Other changes include support for instanced lights in the Hydra render delegate, including support for visibility and matte attributes. To that, 7.3.1 adds the option to run the NVIDIA OptiX denoiser on low-resolution progressive render passes, resulting in much less noisy early previews of a scene, as shown above. The Intel Open Image Denoise (OIDN) denoiser has “10% better performance” on CPU, and users can now choose whether to run OIDN on CPU, GPU or automatic settings. Since we last wrote about Arnold, Autodesk has revamped GPU rendering, reworking a “large part” of the GPU renderer using OptiX 8, the new version of NVIDIA’s GPU ray tracing framework. The changes, made in Arnold 7.2.5, improve performance, particularly time to first pixel in the first few renders following an Arnold upgrade, due to reduced cold cache pre-population time. Performance also scales better to multiple GPUs, with the speed-up from a second GPU when rendering the standard Robot Soldier test scene increasing from 1.1x to 1.7x. New features include an Overlay imager for layering text onto rendered images. Other changes include support for global light sampling in volumes, support for secondary paths like reflections in the new distance shader, and support for toon light-group AOVs. 
In addition, it is now possible to import USDZ files and render them using the USD procedural. All of Arnold’s integration plugins have been updated to support the new features: 3ds Max: MAXtoA 5.7.0 and MAXtoA 5.7.1 Cinema 4D: C4DtoA 4.7.1 Houdini: HtoA 6.3.1 Katana: KtoA 4.3.1 Maya: MtoA 5.4.0 and MtoA 5.4.1 Arnold 7.3.0 and Arnold 7.3.1 are available for Windows 10+, RHEL/CentOS 7+ Linux and macOS 10.13+. Integrations are available for 3ds Max, Cinema 4D, Houdini, Katana and Maya. GPU rendering is supported on Windows and Linux only, and requires a compatible Nvidia GPU. The software is rental-only, with single-user subscriptions costing $50/month or $400/year. https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_user_guide_ac_release_notes_ac_rn_7x_html MotionBuilder 2025 MotionBuilder 2025 is the largest update to the software for some years, if only because it makes MotionBuilder the latest application to support Pixar’s Universal Scene Description. The new USD for MotionBuilder plugin makes it possible to load a USD stage, including textures and lights, and display it in the viewport along with MotionBuilder data. The workflow should make it possible for artists working on VFX or animation productions with USD-based pipelines to animate in context. Technical artists can interact with the USD stage via Python scripting. There are still some key limitations, the biggest being that it isn’t actually possible to convert USD data to MotionBuilder data, or vice versa: the USD stage is purely there for reference. In addition, loading large or complex stages “may cause crashes”, so users are advised to limit visibility of unnecessary objects before importing into MotionBuilder. The other changes in MotionBuilder 2025 are smaller workflow and performance improvements. They include the option to stretch multiple selected clips in Loop mode in the Story Tool, and to lock Global/Take time marks on the timeline, preventing them from being moved. 
There are also new shortcut keys for common tasks, updates to Python scripting, and updates to the way that MotionBuilder displays warning messages. MotionBuilder 2025 is available for Windows 10+ and RHEL and Rocky Linux 8.7/9.3. The software is rental-only, with subscriptions costing $2,145/year. https://help.autodesk.com/view/MOBPRO/2025/ENU/?guid=Whats-New-MotionBuilder-2025
-
Q: Had you prepared for this article and done all the screenshot steps beforehand, or did you simply dismantle the scene after they picked your entry?
-
What version of C4D and what method did you use to model that part? The Thicken generator usually messes things up; the modifier capsule usually solves that for me.
-
What's your advice? I'm urging the woman to get up to date with current technologies directly relevant to her field. I'm also urging her to become more productive, faster and more easily, in order to maintain or improve her client list, income and quality before it's too late and she becomes herself a "relic" struggling to adapt in a highly competitive environment. Cheap clients need fast, high-quality results; expensive clients have exactly the same needs. Before the AI wave, the difference between the two was the artists themselves. If, as an artist, you had some self-respect, you had the upper hand in closing the right deal with the right client for the right price. Now all clients know what it takes for their "vision" to be produced, and they will not back down on their budget; they'll try to get what they want as cheaply as they can until they find the right person. They have the upper hand now. All of this, of course, depends on what their "vision" is. So knowing about current AI capabilities is a card a professional can play to convince a client which solution is best, and for what reasons.
-
I totally understand her frustration. One thing I can advise is to use what she hates to her advantage: get good at AI. Try different models (Stable Diffusion, Firefly, Canvas, Midjourney, DALL-E, Runway), learn inpainting and outpainting, learn the mechanics of how models use prompts/negative prompts and so on. Then use AI-generated content to inspire new derivative works, either by designing complete works with non-AI tools or by combining them with AI tools. Like using Photoshop to correct AI mistakes, enhance or limit details, add or subtract elements. Or even use her own sketches/drafts to elevate her own ideas. It's not the end of the world as long as there are clients. Most artists found it hard to find their way in society, as clients always saw them as machines spewing content, way before the AI wave. If she is the "other" kind of artist (sculptor, textile maker etc... you know, a more tactile artist), then she can definitely take advantage of AI to bring her illustrations into the real world. She might also find more free time to use for her own personal development, like reading more books, or finding one more artistic field to develop, like photography. Any action will help her get out of the AI depression, as long as that action makes her feel she's making progress.
-
Sorry I apologize forgive me.mp4
-
Tap Q to Output Geometry [Scene Nodes Quicktip]
HappyPolygon replied to Donovan K MXN's topic in Cinema 4D
Know what's even better? An Auto-Demonstrate Node option button on the left-side palette, like the Enable Snap option button or the Viewport Solo Automatic option. It would do exactly the same thing with only one click. Good to have you around, Donovan. -
Is there a way to recreate this shader in C4D or RS ?
HappyPolygon replied to HappyPolygon's topic in Cinema 4D
Found this, but I don't know if it's OSL -
Is there a way to recreate this shader in C4D or RS ?
HappyPolygon replied to HappyPolygon's topic in Cinema 4D
He is using a vertex map to recreate it... The effect was revealed back when Fields were introduced. I avoid vertex maps for this kind of thing because they depend on polygon density, making the effect very computationally heavy... that's why I'm looking for a shader solution -
News from the latest developments in AI 3D modelling techniques
HappyPolygon replied to HappyPolygon's topic in News
AI News Broadcast... https://www.channel1.ai/ -
I saw the following video. The whole trick seems to depend on that Feedback node. Nodes in C4D avoid this kind of cyclic connection... Is there another way?
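For what it's worth, the usual workaround in node graphs that forbid cycles is to carry the previous frame's output forward explicitly as state, instead of wiring the output back into the input. A minimal sketch in plain Python (no C4D API; all names here are illustrative):

```python
# Emulating a "feedback" loop by unrolling it: each iteration's result
# becomes the next iteration's input, so no cyclic wiring is needed.
def feedback_sim(initial_state, step, frames):
    """Run `step` for `frames` iterations, feeding each result back in."""
    state = initial_state
    history = [state]
    for _ in range(frames):
        state = step(state)        # this frame's output...
        history.append(state)      # ...becomes the next frame's input
    return history

# Toy example: accumulate growth each "frame".
trail = feedback_sim(1.0, lambda s: s * 1.5, 4)
print(trail)  # [1.0, 1.5, 2.25, 3.375, 5.0625]
```

In node terms, `state` plays the role of the Feedback node's stored value: the graph stays acyclic because each evaluation only reads what the previous frame wrote.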
-
Growing Points with Cinema 4D Scene Nodes and a Volume-Builder
HappyPolygon replied to HappyPolygon's topic in Cinema 4D
-
News from the latest developments in AI 3D modelling techniques
HappyPolygon replied to HappyPolygon's topic in News
I have a feeling that we will see the emergence of TV/web channels with completely AI-generated content. Fake ads, fake news, fake movies, fake morning and night shows, even fake reality shows and games... Who's gonna watch them? Any couch potato, or anyone bored by regular TV. -
News from the latest developments in AI 3D modelling techniques
HappyPolygon replied to HappyPolygon's topic in News
Yeah, it actually happened... I thought we had more time. I don't even see any application left in the stock footage market. We are talking about opportunities to be a one-man army in movie production. Why search a stock footage website and pay for a clip when you can just type exactly what you want into Sora and pay to get it? No one will need hand actors, actors or voice actors for their advertisements... It may still be impossible for the AI to produce the actual new product (like cars, juice boxes and detergents) in the scenes, but it's easy to replace it the classic way in AE, compositing a rendered video of just the 3D model into the generated footage... no need for simulations, scene modeling or light direction. Only 30 grams of carbon in the atmosphere per video... -
News from the latest developments in AI 3D modelling techniques
HappyPolygon replied to HappyPolygon's topic in News
-
One more weird thing I found in Scene Nodes... seems like a UI mishap
-
Dominik Ruckli returns with one more interesting tutorial on Scene Nodes, creating a highly demanded feature in C4D: growth geometry. Although it still doesn't reach Houdini's level of customization, I admire it for the level of complexity and the final quality. Previous attempts at making something like this led to a dead end, due to a fail-safe built into C4D preventing cyclic references on certain generators, until a useful script was found that generates new particles at a constant radius from previous particles... that proved computationally slow, but Dominik Ruckli's setup is faster.
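For reference, the core idea behind that slow script can be sketched in plain Python (no Scene Nodes API; names and parameters are illustrative): each generation spawns candidate points at a constant radius from the current frontier, rejecting any candidate that crowds points we already have.

```python
# Constant-radius point growth by rejection sampling (2D for brevity).
import math
import random

def grow(seeds, radius, generations, min_dist=None, tries=8, rng=None):
    """Grow points outward: candidates sit at `radius` from a frontier point
    and are kept only if at least `min_dist` from every existing point."""
    rng = rng or random.Random(42)               # fixed seed: deterministic
    min_dist = min_dist if min_dist is not None else radius * 0.9
    points = list(seeds)
    frontier = list(seeds)
    for _ in range(generations):
        new_frontier = []
        for (px, py) in frontier:
            for _ in range(tries):
                a = rng.uniform(0.0, 2.0 * math.pi)
                c = (px + radius * math.cos(a), py + radius * math.sin(a))
                # brute-force crowding check against every existing point
                if all(math.dist(c, q) >= min_dist for q in points):
                    points.append(c)
                    new_frontier.append(c)
        frontier = new_frontier
    return points

pts = grow([(0.0, 0.0)], radius=1.0, generations=3)
print(len(pts))  # grows each generation; count depends on seed and tries
```

The `all()` distance check is O(N) per candidate, which is exactly why a naive script crawls on large point counts; a spatial hash or k-d tree over `points` is the standard fix, and presumably part of why the node setup is faster.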
-
I noticed something weird with the Orientation function on cylinders. It could affect every primitive, though... Notice how the edges align in the left image, where the cylinders are positioned using the common rotate tool/coordinates, but in the right image they don't. This happens only with an odd number of rotation segments, which leads me to believe that the automatic orientation function (here -Z) also flips the object on another axis. Maybe this is linked to the Line capsule, which has also been reported as having an orientation "bug", although it was noted that it wasn't a bug; users were still not satisfied... orientation.c4d
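To sanity-check the flip hypothesis, here is a quick plain-Python test (no C4D involved; it only tests the geometry of the idea, not the software): if auto-orientation sneaks in an extra half-turn about the cylinder's own axis, the side edges of an even-sided cylinder land back on themselves, so the flip is invisible, while an odd-sided one visibly misaligns, matching the odd-segment-only behaviour described above.

```python
# Represent each side edge of an n-segment cylinder as an exact fraction of
# a full turn, so set comparisons are free of floating-point noise.
from fractions import Fraction

def edge_turns(n):
    """Angular positions of the n side edges, as fractions of a full turn."""
    return {Fraction(k, n) for k in range(n)}

def survives_half_turn(n):
    """True if a 180-degree turn about the axis maps the edge set onto itself."""
    half = Fraction(1, 2)
    rotated = {(t + half) % 1 for t in edge_turns(n)}
    return rotated == edge_turns(n)

print(survives_half_turn(8))  # True  - even count: a hidden flip is invisible
print(survives_half_turn(7))  # False - odd count: edges no longer line up
```

Adding a half turn shifts each edge by n/2 segment steps, which is a whole number only when n is even; so a hidden 180° component would indeed show up only on odd segment counts.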
-
Modeling connecting pipes without subdivision surface
HappyPolygon replied to Pistol Pete's topic in Cinema 4D
I use this method ... pipes.c4d -
That was a hidden and obscure function... I was also wondering why this generator didn't have any spline options... Dropping any kind of spline into that field to get the generator's result doesn't look or sound like C4D... it's unlike anything else in C4D.