Everything posted by HappyPolygon
-
It doesn't want to comply. Every time I try to connect it to the output Geometry port of the Node Modifier, it creates a new Geometry Op connected to the Op output. I think I finally got it right by editing the output ports' properties...
-
What node is the opposite of the Geometry OP ? I want to pass an Op port to a Geometry port.
-
Someone during summer had posted a melody video like the one below, but I couldn't find his post, so I'm making a new one to share my thoughts. music.mp4
This is definitely not keyframed. I believe the animation is purely procedural, and once you have the right setup it's just modeling and rendering from that point on. I also believe at 95% that it's not possible using C4D.
The setup:
- A simple ball with Bullet dynamics.
- Make an XPresso setup to calculate a direction normal. This is a vector with a constant length that points towards the direction the ball moves. Have a Null at the end of that vector.
- Make an XPresso particle emitter as a child of the Null. Connect a Sound node to the emitter so it emits only one particle, without speed, when a certain threshold of the sound frequency or volume is reached. Make a simple Plane be the particles. Have it as a Collider.
- Now set the ball to have an initial force direction to make a more interesting start.
With this setup there is no way the ball will ever miss an obstacle, making it look like it hits notes everywhere at the right moment. Adjust the length of the vector so the collider is always generated a bit further ahead of the ball. After the music ends, just bake all generated geometry and animation. From then on you just have to model the environment and substitute the obstacles with something fancy. You can animate the obstacles to make them look like they're interacting with the ball (move, change color etc.) if you place them as instances at the center of each plane. With a Time Effector and a simple Spherical Field at the center of the ball it's a piece of cake.
Why I think it's impossible with C4D (no Python): I've tried to use the PStorm with the Sound node... the Sound node seems useless, and I don't know if you can construct a "direction vector".
I'd like to hear your thoughts, especially from people using Houdini. Could my assumption work? And if anyone can overcome the XPresso limitations I'd also like to know how.
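For the direction-vector part, here's a minimal sketch of how I imagine it in a Python tag instead of XPresso. This assumes the tag sits on the ball and there's a top-level Null called "Aim" in the scene; both the approach and the names are just illustrative, not a tested setup.

```python
# Rough sketch only: keep a Null a fixed distance ahead of the ball,
# along its current direction of travel (the "direction normal").
import c4d

VECTOR_LENGTH = 150.0   # constant length of the direction vector
_prev_pos = None        # ball position on the previous frame

def main():
    global _prev_pos
    ball = op.GetObject()            # the object carrying this Python tag
    aim = doc.SearchObject("Aim")    # hypothetical Null sitting at the vector's tip
    if aim is None:
        return

    pos = ball.GetMg().off           # current world position of the ball
    if _prev_pos is not None:
        delta = pos - _prev_pos
        if delta.GetLength() > 1e-5:
            direction = delta.GetNormalized()           # unit direction of travel
            aim.SetAbsPos(pos + direction * VECTOR_LENGTH)
    _prev_pos = pos
```

VECTOR_LENGTH would be the "generate the collider a bit further ahead of the ball" knob; the emitter then just lives as a child of that Null.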
-
USD for 3ds Max 0.5 Autodesk has released USD for 3ds Max 0.5, the latest version of its USD integration plugin for 3ds Max, the firm’s 3D modelling and rendering software. The update adds support for the MaterialX standard for rich material and look dev data, and for exporting facial animations created using 3ds Max’s Morpher modifier as blendshapes. First released in 2021 and officially still in beta, USD for 3ds Max enables users to import and export data from 3ds Max in Universal Scene Description (OpenUSD) format. The plugin can now import and export meshes, materials, cameras, lights and animation data, and supports the .usda, .usdc and .usdz file formats. To that, USD for 3ds Max 0.5 adds support for MaterialX, the ILM-developed open standard for rich material and look dev data, now increasingly being adopted in other DCC applications. The update makes it possible to import or export MaterialX materials in .mtlx format, and to edit MaterialX shader graphs in the Compact Material Editor or Slate Material Editor. The online release notes are very brief, but Scanline VFX TD Changsoo Eun has an excellent post on his website that describes the workflow in detail. In addition, the USD Exporter gets support for 3ds Max’s Morpher modifier. That makes it possible to export morph-based facial animations to USD, by exporting Morpher channels as USDSkel blend shapes. USD for 3ds Max is compatible with 3ds Max 2022+. It is a free beta for registered 3ds Max users. 3ds Max itself is available for Windows 10+. It is rental-only. Subscriptions cost $235/month or $1,875/year. In many countries, artists earning under $100,000/year and working on projects valued at under $100,000/year qualify for Indie subscriptions, which now cost $305/year. https://help.autodesk.com/view/3DSMAX/2024/ENU/?guid=GUID-8B179650-9521-4DC3-B983-EE5A3AF203C6 Animate Anything Anything World has launched Animate Anything, a new AI-driven online service for automatically rigging and animating low-poly 3D models. It supports humanoids, animals, and vehicles, and the rigged models can be downloaded in FBX, glTF/GLB and Collada format, for use in DCC applications and game engines. The service can be used for free on three models per month. Founded in 2018 by TD and former web developer Gordon Midwood, Anything World is an AI-driven platform for building and deploying real-time applications. It is mainly used for creating mobile and web apps and AR experiences, with showcase projects including work with Atlantic Records, Ministry of Sound and Ubisoft. It’s WebGL-based, so it works in any modern web browser, and comes with SDKs for integrating the platform with Unity and Unreal Engine. Animate Anything itself is a standalone service, so you don’t need to be using the rest of the platform: you can just upload the models you want to have rigged and download the results. It works on quite a range of model types: as well as humanoid characters, it supports creatures, including quadrupeds, insects, fish and birds; and vehicles including cars, trucks and bikes. File formats supported include FBX, glTF/GLB and Collada, so the rigged models can be used in pretty much any DCC application: the website specifically namechecks Blender and Maya. At the minute, Animate Anything is geared towards assets suitable for online or AR experiences: while you can upload models over 20,000 vertices, they are decimated for processing. The rigs generated have bones and skin weights, but don’t currently have rig curves. 
In addition, the platform currently doesn’t support props or submeshes like layered clothing, and it doesn’t generate finger bones or blendshapes for facial animation. Anything World tells us that it hopes to ease these restrictions in future. Anything World’s Animate Anything service runs in a standard web browser. It operates on a credit basis, with each model processed consuming one credit. Free Individual accounts come with three credits free per month, and users can buy packs of extra credits, which cost $10 for 10, $25 for 50, or $100 for $250. Paid Micro and Pro accounts for Anything World itself cost $50/month and $250/month. https://anything.world/tech ZBrush for iPad Maxon has announced ZBrush for iPad, a new tablet edition of the digital sculpting software. The iPad edition, which is due for release in 2024, was announced at ZBrush Summit 2023. Although ZBrush has been a standard tool for digital sculpting in concept design, VFX, game art and motion graphics for two decades, it has always been difficult to use on the move. While it’s possible to run the desktop version on a mobile workstation, there is currently no tablet edition, driving artists to other iPad digital sculpting apps, including Maxon’s own Forger. ZBrush for iPad should change that, making the software available for iOS devices. It’s due out in 2024, but that’s currently all of the information we have: Maxon hasn’t announced how the feature set will differ from the desktop version, or whether subscribers will get it for free. The firm is currently calling for beta testers: you can sign up via the link at the foot of the story. ZBrush for iPad is due in 2024. Maxon hasn’t announced an exact release date, pricing or system requirements. https://www.maxon.net/en/zbrushforipad Plasticity 1.3 Plasticity 1.3 goes even further in supporting Blender users, since the update introduces the Blender Bridge: a new add-on capable of live-linking the two applications. It is intended to speed up look development, making it possible to preview models with proper materials and lighting inside Blender, while continuing to work on them live. It is also possible to edit the geometry in Blender: “almost all” of Plasticity’s faceting tools are available within Blender, making it possible to adjust mesh density or edge flow. Other new features in Plasticity 1.3 include a Chordal option for fillets. Chordal fillets maintain a constant length for the chord of the fillet, regardless of the angles being filleted. You can see a demonstration in the video above. There are also some significant workflow improvements, including a new system of customisable radial menus, providing quick access to key settings. Two are provided by default, one for toggling selection modes, and one for modifying 3D viewport settings. In addition, the new Isolate command makes it possible to solo selected objects, hiding the other objects in a scene to streamline editing. It is also now possible to open multiple Plasticity windows when kitbashing, and copy and paste between them, with the option to copy the exact placement of an object. Plasticity 1.3 features a lot of smaller feature and workflow updates, including STL import. Plasticity 1.3 is available for Windows 10+, Ubuntu 22.04+ Linux and macOS 12.0+. It runs on both Intel and Apple Silicon Macs. The update is free to existing users. Perpetual node-locked Indie licences cost $149, up $50 from the previous release, and import and export files in OBJ, STEP and Parasolid XT format. 
Studio licences support a wider range of file formats and cost $299. There is also a free trial edition, which only imports STEP files and exports OBJ files. https://doc.plasticity.xyz/whats-new Project Stardust Project Stardust combines a number of AI features, including object identification, auto-masking, color correction and generative synthesis of new content, The demo shows a user selecting objects within an image by clicking on them, with Project Stardust automatically identifying the boundaries of the object and masking it. The user can then reposition the object simply by dragging it around; delete it, with Project Stardust automatically synthesising the missing background; or replace it, by entering a text prompt describing a new object to insert in its place. A lot of this functionality will be familiar to Photoshop users, who already have access to AI-based object selection and object deletion – and, as of Photoshop 25.0, new text-to-image capabilities based on Firefly, Adobe’s generative AI toolset. However, there are some interesting new options, like a ‘Remove distractors’ command, shown automatically identifying and removing background characters from a shot. Adobe hasn’t said which applications the new image-editing engine is designed for, but the format of the video suggests that it is intended for mobile apps. There are also currently no details on when the technology is likely to be publicly available in Adobe software. https://labs.adobe.com/projects/stardust/ Twinmotion 2023.2 Users can now import animations in FBX or glTF/GLB format for rendering inside Twinmotion, previewing the results using the video controls or the scrubber. Both static and skeletal meshes are supported, so you can import 3D characters, but not currently blendshape-based facial animations, or animated cameras and lights. Preview 2 also updates Twinmotion’s Paint and Scatter tools, used to dress scenes with instanced objects. It is now possible to scatter any simple static mesh from the Twinmotion library, including meshes from Quixel Megascans and Sketchfab. There are also improvements to water materials, which now correctly render path traced transparency; and to the snow and rain effects, which now respond better to wind speed and direction. Epic Games has also replaced the old non-commercial free trial of Twinmotion, which capped exports at 2K resolution, but which was otherwise fully featured, with a new Community Edition. It’s still free, but as well as limiting resolution, it watermarks output, and lacks access to online collaboration system Twinmotion Cloud. Twinmotion 2023.2 Preview 1 and 2 are only available in the Community Edition, which Epic Games says will now be the policy for all future preview releases. Twinmotion 2023.2 is available as a public preview for Windows 10+ and macOS 12.0+. Integration plugins are available for CAD and DCC apps including 3ds Max 2016+, SketchUp Pro 2019+ and Unreal Engine 4.27+. Epic hasn’t announced a final release date. New licences cost $499. The software is free for students and educators, and there is also a free Community Edition of the software which caps export resolution at 2K, watermarks output, and lacks access to Twinmotion Cloud. https://www.twinmotion.com/en-US/docs/2023.1/twinmotion-release-notes/twinmotion-2023-2-PR2-release-notes Gaea 2.0 According to QuadSpinner, Gaea 2 represents a major rewrite of the software’s core that will improve overall performance by “several magnitudes”. 
As well as optimising the code, some processes have been GPU-enabled, with CPU fallbacks. The node graph has been redesigned to reduce the learning curve for new users, with new node modifiers to make it possible to generate terrain with lighter graphs. Users will also be able to create their own custom tools with macros and scripting. The release also rethinks terrain design workflow to “address the biggest problems artists face in achieving control over procedural shapes”. The ‘primitive + lookdev’ workflow from Gaea 1 has been superseded with what QuadSpinner describes as ‘primitive + landscapes + surfaces’. Gaea 2.0’s new Surface nodes apply smaller-scale erosion effects to parts of a terrain after it has been generated, making terrain design a multi-stage, multi-scale process. This multi-scale design philosophy extends to very large environments, with a new God Mode making it possible to modify parts of worlds and have their surroundings react to the changes. According to QuadSpinner, the new workflow makes it possible to manage biomes within a world independently, while still having them share procedural characteristics. Artists work on an infinite base terrain using a multi-resolution workflow, with the option to export terrain as hybrid-resolution meshes “friendly” to Nanite, Unreal Engine 5’s virtual geometry system. For teams, the system supports simultaneous multi-user editing of different parts of a world, with source control for the individual inputs. Other new features include the Erosion 2 algorithm, described as being capable of “shapes previously unavailable” in Gaea, and as preserving the original character of the terrain better. It is provided in parallel to the existing Erosion 1 algorithm, although it’s much faster, with speed boosts of “up to 10x”. You can see QuadSpinner’s performance comparisons for different types of terrain above. In addition, Gaea 2.0 will come with a “fully fledged” bridge to Unreal Engine 5. As well as simply exporting terrain to the game engine – which can be done as a height field or 3D mesh – and having the bridge set up materials, it will be possible to edit terrain in UE5. Users can select properties from the Gaea project to remain editable inside Unreal Engine, or use a new set of Unreal Engine Gaea Terrain Modifiers to make additional changes. Gaea 2 will not be backwards-compatible, but existing users get a “big discount” on upgrades, and can continue to use their Gaea 1 licence in parallel to Gaea 2, even after upgrading. Gaea 2.0 will be a Windows-only application. It is due for release in “late Q1 2024”, with pre-orders due to begin on 24 November 2023. The price remains unchanged from Gaea 1.x. There is a free Community Edition, which is licensed for commercial use and provides access to most of the key tools, but which caps export resolution at 1K. The Indie edition, which caps resolution at 4K, costs $99; the Professional and Enterprise editions, which provide 8K resolution plus a range of advanced features, cost $199 and $299. https://quadspinner.com/gaea2/ TopoGun 3 On its release in the late 2000s, TopoGun became one of the first tools for retopologising high-res meshes, enabling artists to turn sculpts into lower-res, more animation-friendly assets. Although still widely used, the software seemed to enter a period of dormancy after the release of TopoGun 2 in 2012, with little public activity until the beta release of TopoGun 3 in 2020. 
Three years on, TopoGun 3 has finally moved beyond beta, and was officially released yesterday. Workflow in TopoGun begins by importing a high-resolution reference mesh to retopologize: typically a sculpt or a 3D scan. In TopoGun 3, it can be imported in FBX as well as OBJ format. Users can plan out the new topology they want to create using the Guide Lines tool to draw strokes on the reference mesh to establish overall edge flow. The new geometry can then be drawn out directly over the surface of the reference mesh, and adjusted using a range of editing tools, either a few vertices at a time, or by using the pressure-sensitive Brush tool to move or relax larger selections of vertices. It is possible to restrict edits to certain parts of the mesh by using the Mask tool to paint masks. New tools in TopoGun 3 include Slice and Cut, for cutting into mesh geometry, and Circle and Shell, the latter of which can be used to generate clothing from the mesh surface. The Tubes tool, for retopologizing cylindrical parts of the mesh like limbs, has been rewritten. Although TopoGun is geared towards manual retopology work, it also supports automatic retopology for parts of the mesh where precise vertex placement is less critical, such as ears. TopoGun 3 makes it possible to combine manual and automated workflows in new ways, thanks to the Patch tool: one of the most interesting new features in the release. It enables users to sketch out new topology for the major regions of a sculpt following anatomical landmarks, as shown in the image above, and have TopoGun generate new topology within them automatically. Patches can then be joined to create a continuous new low-poly surface for the entire sculpt. Other key features in TopoGun include Subdivision, which subdivides the low-res retopologized mesh to preserve fine details from the reference mesh. It is also possible to transfer detail to the retopologized mesh via texture maps baked from the high-res reference mesh: TopoGun 3 can generate normal, displacement, ambient occlusion, color, curvature, transmission and cavity maps. Both subdivision and baking are highly multithreaded – TopoGun 3 supports up to 256 CPU cores – and baking ambient occlusion maps is now GPU-accelerated via OpenGL. TopoGun 3 is compatible with Windows 7+ and macOS 10.13+. Now that the new version is out of beta, node-locked licences cost $149.99, while floating licences cost $349.99. Upgrades from TopoGun 2 cost $29.99 and $74.99, respectively. http://topogun.com/ Arnold 7.2.4 Compared to other recent versions of Arnold, Arnold 7.2.4 is a smaller update, and is described in the documentation as a minor feature release. The toon shader gets better detection of internal edges, resulting in previously missed outlines being rendered correctly, as shown in the image above. Performance improvements include better sampling of mesh lights, with render times on scenes with multiple mesh lights reduced by “up to 5%”. Interactive rendering has also been improved, although the release notes don’t put a figure on the performance boost. Pipeline changes include support for OCIO color space aliases in the OCIO color manager, and the removal of restrictions on node naming conventions in custom MaterialX node definitions. All of Arnold’s integration plugins have been updated to support the new features:
3ds Max: MAXtoA 5.6.5
Cinema 4D: C4DtoA 4.6.6
Houdini: HtoA 6.2.4.0
Katana: KtoA 4.2.4.0
Maya: MtoA 5.3.4
Arnold 7.2.4 is available for Windows 10+, RHEL/CentOS 7+ Linux and macOS 10.13+. 
Integrations are available for 3ds Max, Cinema 4D, Houdini, Katana and Maya. GPU rendering is supported on Windows and Linux only, and requires a compatible Nvidia GPU. The software is rental-only, with single-user subscriptions costing $50/month or $400/year. https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_core_7240_html Cell Fluids CG artist and programmer Shahzod Boyhonov (specoolar) has released Cell Fluids, a lightweight fluid solver based on Blender's Geometry Nodes system. It's a grid-based solver and only generates displacement of the water surface, so it isn't suited to complex simulations, but it's described as being fast enough to work interactively in "semi-realtime" inside the open-source 3D software. Although Blender 3.6, released earlier this year, added support for particle simulation to the Geometry Nodes toolset, Cell Fluids isn't actually particle-based. Instead, it's a grid-based solver that generates displacement to the fluid surface, so it's intended for mimicking bodies of ground water like rivers and oceans – not, say, liquids pouring out of containers – and it can't reproduce effects like spray, or interactions with deforming objects. However, it's compatible with both Blender's Cycles and Eevee renderers, and is described as being fast enough to work interactively, with the simulation being calculated in "semi-realtime". The results can be baked to a static mesh with flow map textures, for final-quality rendering offline. Cell Fluids is compatible with Blender 3.6+. It costs $20. https://blendermarket.com/products/cell-fluids After Effects 24.0 The biggest change in After Effects 24.0 is Next-Gen Roto Brush, an improved version of the Roto Brush, After Effects' rotoscoping tool. It enables users to create roto masks by roughly painting out the part of the frame to be isolated, with the software automatically generating the full mask and propagating it through the other frames of the video being rotoscoped. On paper, the new version isn't as big an update as 2020's Roto Brush 2.0, which applied machine learning to generate masks faster and more accurately. However, Adobe still describes it as a "major advance", improving speed, quality of the masks generated, and stability, particularly in shots where the object being masked is occluded by foreground objects, and on hard-to-track footage like hair and transparencies. The update also extends support for movie industry color management standard OpenColorIO (OCIO), introduced earlier this year in After Effects 23.2. After Effects now has the OCIO Look Transform and OCIO CDL Transform as effects under the Color Correction category. Other changes include a new set of scripting hooks for text and font manipulation, intended to make it possible to match the features available in the After Effects UI via scripting. In addition, encoding and decoding of video footage using the HEVC (H.265) and H.264 codecs is now hardware-accelerated on Intel's discrete Intel Arc GPUs. GPU-accelerated decoding of R3D (RED RAW) footage – previously only available on macOS – is now also available on Windows, on AMD and NVIDIA GPUs with at least 6GB of GPU memory. In the online documentation, the update is also referred to as the October 2023 release. Subscriptions to After Effects cost $31.49/month or $239.88/year, while All Apps subscriptions, which provide access to over 20 of Adobe's creative tools, cost $82.49/month or $599.88/year. 
https://helpx.adobe.com/after-effects/using/whats-new/2024.html Image 2 AI model in Firefly Adobe's second-generation Image 2 AI model improves the quality and detail of the images generated, improving prompt coherence, and making it possible to generate 4K images. It also powers one of the key new features in Text to Image: Generative Match. The option, which is available in beta, makes it possible to control the content of a generated image with text prompts, but to use a source image to guide its visual style. Users can choose from a set of readymade style references, or upload their own, making it possible to generate images matching the style of a current project – or, in the case of illustrators and concept artists, to generate images mimicking their own personal art style. The demos shown during Adobe MAX showed a range of different reference images being used to guide the look of a generated illustration, from stylized to photorealistic, and even an actual photo. Other new Image 2-powered features in Text to Image include Photo Settings, which makes it possible to adjust generated photos using a set of controls mimicking physical photography, including Aperture, Shutter Speed and Field of View. It is also now possible to have Text to Image suggest text prompts for you that are likely to give better results, and to use negative prompts to specify content to exclude from images. Image Mode 2 is available in the web version of Firefly, which supports the Chrome, Safari and Edge browsers on desktop. Pricing is credit-based, with 'fast' generative credits available as part of subscriptions to Adobe's Creative Cloud tools, and via separate subscriptions. https://blog.adobe.com/en/publish/2023/10/10/future-is-firefly-adobe-max DeepMotion Animate 3D 5.0 The main new feature in Animate 3D 5.0 is multi-person tracking, making it possible to track multiple actors within a single source video. Depending on their subscription tier, users can track between two and eight people simultaneously, including full-body, hand and facial tracking. It's possible to select only those actors you want to track from the source footage, and to assign separate 3D characters to each. However, the system is still officially in beta, and has a number of limitations: notably, that actors can't interact, and should not touch in the footage. Other changes include 'full support' for single-person capture on mobile devices, making it possible to use Animate 3D within mobile as well as desktop web browsers. The online Animate 3D Portal has also been updated, reorganising the Character Library and adding a new search system. Users can also now add background environments when exporting videos from animations in MP4 format, and export images of character poses with transparent backgrounds. 3D avatars generated using Avaturn, support for which was introduced in Animate 3D 4.2, now support Animate 3D's face tracking. Animate 3D should run in any standard desktop or mobile web browser. Usage is priced on a credit basis, with one credit corresponding to one second of full-body animation. Face and hand tracking cost 0.5 credit per second extra. Free accounts now get 60 credits per month, and can process video clips up to 20 seconds long, at up to 1080p resolution, and up to 30fps. They are limited to non-commercial use. 
Paid accounts now cost between $15/month and $300/month, or $108/year and $996/year, with higher tiers supporting higher-resolution and higher-frame-rate clips, and advanced features like motion smoothing. https://www.deepmotion.com/post/multi-person-tracking Open 3D Engine 23.10 First announced in 2021, O3DE is an open-source, cross-platform "AAA-capable" game engine pitched as a successor to Lumberyard, AWS's free engine. It features a modular, SDK-like design, open-source build system and new networking stack, and includes hardware-accelerated ray tracing renderer Atom. The engine is the first release of the new Linux-Foundation-backed Open 3D Foundation: a counterpart to VFX technology body the Academy Software Foundation for the game development industry. O3DE 23.10 features a number of graphics improvements, with Atom getting support for ray traced reflections and mesh instancing. Performance improvements include better memory allocation in DirectX and Vulkan. The update also lays the groundwork for future changes, with a new framework for multi-GPU support, and the "start" of mobile support for iOS and Android. Developers get an experimental new Document Property Editor, intended to enable tool creators to write editors without having to "dive into the complexities inherent in Qt-based UI, element sorting, or user-driven filtering". Script Canvas, O3DE's visual programming system, gets new 'compact nodes' intended to handle simple operations while taking up less visual space in the node graph. There are also a number of workflow improvements when installing other users' content created in O3DE, and when exporting projects to Windows, Linux and iOS. Outside game development, there are updates to the ROS2 Gem added in O3DE 23.05, which integrates the Robot Operating System, a set of open libraries for creating robot sims. Changes include a new contact sensor, performance improvements to the camera and LIDAR sensors, and support for finger and vacuum grippers. New user-created add-ons include RenderJoy 2.0, a clone of shader-authoring system ShaderToy for O3DE, and a volumetric cloudscapes Gem. Both were created by developer and former AWS graphics software engineer Galib Arrieta. Compiled binaries of Open 3D Engine 23.10 are available as free downloads for Windows 10 and Ubuntu 20.04 Linux. The source code is available under an Apache 2.0 licence. https://www.docs.o3de.org/docs/release-notes/2310-0-release-notes/ Nuke 15.0, NukeX 15.0, Nuke Studio 15.0 Nuke 15.0 features a number of updates to key pipeline technologies, including support for the current CY2023 VFX Reference Platform specification, and for USD 23.05. The release also introduces experimental support for OpenAssetIO, the Foundry-developed open standard for exchange of data between DCC and asset-management software, which was adopted by the Academy Software Foundation last year. The documentation describes the initial implementation as a "very basic tech preview" introduced to let studios begin testing OpenAssetIO in their pipelines. In addition, Nuke 15.0 introduces native support for the Apple Silicon processors in current Macs, improving general performance speeds by "up to 20%" on macOS. The change makes Nuke the latest key application in VFX pipelines to support M1 and M2 processors natively, Autodesk now having introduced support in Maya and Arnold, and SideFX having introduced support in Houdini. The 15.0 releases also extend the USD-based 3D compositing system introduced in Nuke 14.0 last year. 
UI and workflow updates include a dedicated 3D toolbar in the viewer, and two-tier selections – for example, to select faces within an object – for more precise control. There are also accompanying updates to the Scanline Render system and to the GeoMerge node, used to merge stacks of objects into a single scene graph. The latter gets four new merge modes, intended to give users more control over how data is merged when using the new 3D system. The underlying USD implementation now supports the USD Python bindings, making it possible to manipulate USD data directly through Python; and gets structural changes intended to improve performance, to make it easier to inspect and filter complex scenes, and to provide greater user control in future export workflows. There are also further updates to AIR, Nuke's machine learning framework, intended to enable users to train their own neural networks to automate repetitive tasks like roto. Training times when using the CopyCat node have been reduced by "up to 50%", with key changes including the option to distribute training across multiple machines. Nuke Studio users get more complex Blink kernels available as soft effects in the editorial timeline, including Denoise, LensDistortion and blur effects. AIR's Inference node is also available as a soft effect. Nuke 15.0 is compatible with Windows 10+, Rocky Linux 9.0 and macOS 12.0+, and supports Apple Silicon processors natively. Nuke 14.1 is compatible with Windows 10+, CentOS 7.4-7.6 Linux and macOS 12.0+. It supports Apple Silicon using Rosetta emulation. https://campaigns.foundry.com/products/nuke-family/whats-new https://learn.foundry.com/nuke/content/release_notes/15.0/nuke_15.0v1_releasenotes.html Vulkan rendering At the minute, however, the reality is rather less glamorous: the Vulkan backend is described as "highly experimental", and has a long list of known limitations. According to the original release notes, performance is "around 20% of what we want to achieve", but the developers aim to focus on stability and platform support first. Both the new backend and Eevee Next, a broader overhaul of Eevee, were originally scheduled for Blender 4.0, due out next month, but have now been pushed back, and are currently available in alpha builds of Blender 4.1. For Mac users, the transition from OpenGL is further along: the new Metal backend introduced in March will become the only one supported on Apple devices in Blender 4.0. The migration makes Blender one of a relatively small number of CG applications to have adopted Vulkan, along with game engines Godot and Open 3D Engine. Among other DCC tools developers, Maxon ported Cinema 4D's viewport from OpenGL to the closed-source DirectX in 2021, having previously adopted Metal on macOS. Other applications, like Maya, already had DirectX viewport display modes. https://devtalk.blender.org/t/blender-vulkan-status-report/25706/18 GeoGen Developed as a "cooldown project" between bursts of work on real-time simulation tools EmberGen and LiquiGen, and originally known as SceneryGen, GeoGen is a real-time tool for generating heightmaps and 3D terrain. JangaFX describes it as a successor to Grand Designer, the 'procedural universe generator' developed by Gil Damoiseaux, now working as an R&D engineer at the company. It features an EmberGen-like interface and workflow, with users able to generate terrain using a combination of a node graph, and slider and curve controls. JangaFX pitches it as being "built like Substance [3D] Designer, but for terrain and planets". 
According to its website, GeoGen is being developed with real-time applications like games in mind, and will be a "fresh take on what terrain and planet generation can be… with modern node-based workflows, unique simulations, and workflow enhancements". JangaFX has just opened up GeoGen for alpha testing, and is currently running a raffle system on its Discord server for access to the first 50 places. The results are being announced today, so you may still just be in time to enter, but if not, the alpha will be extended to all users with JangaFX Suite licences in "mid-November". JangaFX hasn't announced pricing, system requirements or a final release date yet. The JangaFX Suite, which also includes EmberGen and vector field generator VectorayGen, costs $399.99 for a perpetual node-locked licence. https://discord.com/channels/274382782627840002/369901010552094721 EmberGen 1.0.7 Publicly available since 2020, and officially released earlier this year, EmberGen is a GPU-based volumetric fluid simulator that makes it possible to create fire and smoke effects of a complexity previously only seen in offline tools in real time. The resulting data can be exported to other DCC applications in OpenVDB or Alembic format, or rendered within EmberGen as flipbook image sequences for use in game engines like Unity and Unreal Engine. The software is GPU-agnostic, and has fairly low minimum hardware requirements: an Nvidia GeForce GTX 1060 or AMD equivalent. EmberGen 1.0.7 isn't a major update, but it does introduce a number of handy new features. The main one is probably ground plane shadow passes, with users now able to choose whether simulations cast shadows on the ground beneath. There is also "limited support" for imported FBX meshes that require blend skinning, like animated characters. Other changes include the option to match display resolution automatically to a photographic backplate, and to export relative luminance for "several capture types". EmberGen 1.0.7 is available for Windows 10+ and Linux. Indie subscriptions, for artists earning under $1 million/year, cost $19.99/month, with users qualifying for a perpetual licence after 18 months. Indie perpetual licences cost $299.99. For studios with revenue up to $100 million/year, perpetual node-locked licences cost $1,399.99; floating licences cost $2,299.99. https://jangafx.com/software/embergen/ Mari 7.0 Foundry announced both Mari 7.0 and Katana 7.0 – the upcoming next major update to the lighting and look dev software – in parallel with the release of Nuke 15.0, the latest version of the compositing and editorial software. All three releases support common VFX industry standards, updating the three applications to the CY2023 spec for the VFX Reference Platform, and to USD 23.05, and switching to Rocky Linux as a successor to CentOS. The main change in Mari 7.0 itself is the new system for baking geometry-based texture maps, like curvature and occlusion maps. Speaking to CG Channel earlier this year, Foundry described the new Vulkan-based architecture as more performant than Mari's current one, which is based on the technology used in Modo, its 3D modeling and rendering software. The change is intended to remove the need for users to switch to third-party applications such as Adobe's Substance 3D tools for baking textures.
Texturing content — with new Python Examples and more procedural nodes, access an additional 60 grunge maps, courtesy of Mari expert Johnny Fehr. 
Automatic Project Backups — with regular auto-saving, revert to any previously saved state either locally or across a network.
Upgraded USD workflows — reducing pipeline friction, the USD importer is now more artist-friendly, plus Mari now supports USD 23.05.
Shader updates — ensuring what's seen in Mari is reflected in the final render, shaders for both [Chaos's] V-Ray 6 and Autodesk's Arnold Standard Surface have been updated.
Mari 7.0 is due for release this year, although Foundry hasn't announced an exact date. The release will be compatible with Windows and Rocky Linux 9.1. The current stable release, Mari 6.0, is compatible with Windows 10+ and CentOS/RHEL 6.0+ Linux. Mari is available subscription-only. Individual subscriptions cost $86/month, up $18/month since the release of Mari 6.0, or $689/year. Subscriptions for teams cost $1,119/year. https://www.foundry.com/products/mari OctaneRender 2023.1 New features in OctaneRender 2023.1 include five new 'analytic primitives' that give a "fast approximation of direct light from large direct light sources on diffuse or glossy materials". They include a directional light, and disk, quad, sphere and tube lights, which are described as "similar to mesh lights" but generating little noise at low sample values. The update also introduces support for adjustment layers in the Composite texture, making it possible to fine tune a composite at a specific point in the layer stack. Layers can be grouped into isolated or non-isolated groups, which generate the output of the group using a transparent background and the current state of the texture stack, respectively. Output AOV nodes have also been reworked to make them layer-based. Other changes include the option to add fog, blur, lens flares and chromatic aberration to renders as post processes, reducing calculation times at the expense of absolute accuracy. The update also introduces animation time transforms, intended for use when bringing two or more animated files together; and improves output of the rounded edge shader on large angles. The Windows and Linux editions are compatible with 64-bit Windows 7+ and Linux, and require a CUDA 10-capable Nvidia GPU. The macOS edition, Octane X, is compatible with macOS 10.15.6 to macOS 12 on Macs with AMD GPUs, or macOS 13+ on Macs with Apple M1/M2 GPUs. The software is now rental-only, via Otoy's Studio+ subscriptions, which cost €23.95/month, and which include integration plugins for 21 DCC applications, plus a range of third-party software. Otoy also provides free 'Prime' editions of both OctaneRender and Octane X, which are limited to rendering on a single GPU, and which come with a smaller set of DCC integration plugins. https://render.otoy.com/forum/viewtopic.php?f=24&t=82229 HeliumX AI HeliumX AI is HeliumX's latest After Effects plugin for motion graphics artists, following Helium, its toolset for creating 3D animations inside After Effects, and free edition HeliumX Lite. HeliumX AI makes it possible to generate images inside After Effects, either for use as inserts or background plates, or as seamlessly tiling textures. It runs locally, making use of the user's GPU, and works from either text prompts or source images from an existing comp: either a single still, or a sequence of frames. Image generation is based on Stable Diffusion XL via the InvokeAI backend, and supports ControlNet for finer control of the results. 
As ever, before using Stable Diffusion-based tools, be aware of the wider issues: The Verge summarised the ethical and copyright issues at the time of its launch, and in several later stories. HeliumX AI is in beta. It is compatible with After Effects 2020+ on Windows 10+ only, and requires a compatible GPU: Helium recommends a NVIDIA GeForce RTX 30 Series or 40 Series. To use it, you need to have installed InvokeAI. The plugin is available via the installer for the free trial edition of Helium. HeliumX tells us that it can be used for free indefinitely. https://heliumx.tv/heliumx-ai/ Twinmotion 2023.2 The update moves the software to the same foundation as Unreal Engine 5.3, the latest version of the game engine and real-time renderer. The updated engine has a number of new features, the major one being support for Lumen, Unreal Engine’s real-time global illumination (GI) system. The software now has two real-time rendering modes: Standard, which uses a volumetric approach to ambient lighting, and Lumen, which is a surface-based approach, using ray tracing to calculate one light bounce and one reflection bounce. Lumen generates better-quality renders, but uses more CPU/GPU resources and RAM. It provides a middle option between the old real-time rendering and the Path Tracer, which provides even greater render quality, but uses even more resources. In the initial release, Lumen is not available in VR mode, or on the cycloramas or LED walls used in virtual production, although the latter should be supported “soon”. Twinmotion 2023.2 also moves the software to real-world light values for sun intensity. As a result the software’s auto-exposure algorithm has been adjusted to work with a larger range of exposure values, with the total range increased from 4 stops to 26 stops. In addition, new Local Exposure settings have been added to help preserve shadow and highlight details in scenes with high dynamic ranges. Other changes include a new Basic glass material, intended as a faster-rendering alternative to the more fully featured Standard and Colored glass materials for scenes that contain a lot of clear glass. There are also improvements to water materials, which now correctly render path traced transparency; and to snow and rain effects, which respond better to wind speed and direction. Another major change in Twinmotion 2023.2 is support for animated objects. Users can now import animations in FBX or glTF/GLB format for rendering inside Twinmotion, previewing the results using the video controls or the scrubber. Both static and skeletal meshes are supported, so you can import 3D characters, but not currently blendshape-based facial animations, or animated cameras and lights. In addition, the update adds native support for Adobe’s .sbsar format, making it possible to import materials created in apps like Substance 3D Designer and edit them inside Twinmotion. Materials only remain procedural when imported directly, not via Datasmith; and textures have a maximum resolution of 4,096 x 4,096px. The release also updates Twinmotion’s Paint and Scatter tools, used to dress scenes with instanced objects. It is now possible to scatter any simple static mesh from the Twinmotion library, including meshes from the Epic Games-owned online libraries Quixel Megascans and Sketchfab. Assets from Sketchfab can now be imported with the model hierarchy intact. 
Epic Games has also replaced the old non-commercial free trial of Twinmotion, which capped exports at 2K resolution, but which was otherwise fully featured, with a new Community Edition. It’s still free, but as well as limiting resolution, it lacks access to online collaboration system Twinmotion Cloud. Twinmotion 2023.2 is available for Windows 10+ and macOS 12.x. Integration plugins are available for CAD and DCC apps including 3ds Max 2017+, SketchUp Pro 2019+ and Unreal Engine 4.27+. New licences now cost $749, up $250 from the previous release. The software is free for students and educators, and there is also a free Community Edition of the software which caps export resolution at 2K, and lacks access to Twinmotion Cloud. https://www.twinmotion.com/en-US/news/twinmotion-2023-2-is-here https://dev.epicgames.com/documentation/en-us/twinmotion/twinmotion-2023.2-release-notes Flame 2024.2 The update introduces a new workflow for subtitles, making it possible to create or edit SubRip (SRT) subtitles directly in the timeline, and to export MP4 files with H.264 compression. Flame 2024.2 is available for Rocky Linux 8.5/8.7 and macOS 12.0+ on a rental-only basis. Since Flame 2023, the cost of subscriptions has risen to $610/month, up $30/month, or $4,870/year, up $235/year. Flare 2024 and Flame Assist 2024 are also available for Rocky Linux 8.5/8.7 and macOS 11.1+. Single-user subscriptions now cost $2,595/year. Lustre 2024 is only available on Rocky Linux 8.5/8.7. A single-user subscription now costs $4,870/year. https://help.autodesk.com/view/FLAME/2024/ENU/?guid=GUID-3B373CA8-B0C7-4CBE-8632-28F963E27324 https://www.autodesk.com/support/technical/article/caas/sfdcarticles/sfdcarticles/Deprecation-of-Support-for-Sparks-Questions-and-Answers.html
-
Maxon shooting themselves in the foot: Educational Licenses Suspended.
HappyPolygon replied to No One's topic in Discussions
I think a retro-fied version of 2024 wouldn't really matter for schools. If the curriculum doesn't involve advanced/expert levels then a version like R12-R14 isn't bad at all. BodyPaint, Tracker, Sculpting, Pyro, Python are too specialized if you're teaching modeling and animation. And I guess they teach other apps also so those fields will be covered by others like Nuke, Substance, ZBrush and Redshift... -
Why "Density" and "Mass" values in Dynamics instead of "Weight"?
HappyPolygon replied to MJV's topic in Discussions
Actually the "marbles experiment" is something like the "Schrödinger's Cat" analogy. It's a nice visual/thought experiment but most professors forget to mention that the analogies deviate a lot from what the theories try to describe... Why haven't you just tried to simulate this as if it were real planets in space ? You wouldn't have to deal with cloth simulations at all. Just put an Attractor at the center of each planet/sun with different strength, each attractor attracts all other objects except the planet it is a child of. -
It would be interesting to see Mass/Density in action if collisions between marbles were disabled (a ghost race):
- all marbles start from the same position with the same velocity
- each marble is slightly denser and smaller in size
- let's find out which will reach the center first and which last (a rough back-of-the-envelope sketch is below).
Apparently C4D is bad at distributing mass/density on instances because the tag is assigned on the Cloner, not the clones.
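Here's what I mean by the back-of-the-envelope sketch (plain Python, nothing C4D-specific, and all the numbers are made up). If the Attractor applies the same force to every marble, mass comes from density times volume, and the time to cover a fixed distance from rest is t = sqrt(2*d*m/F), so the arrival order only depends on each marble's resulting mass:

```python
import math

FORCE = 10.0      # same attractor force on every marble (arbitrary units)
DISTANCE = 2.0    # distance from the start position to the center

# (density, radius) pairs: each marble slightly denser and smaller than the last
marbles = [(1.0 + 0.1 * i, 0.5 - 0.02 * i) for i in range(10)]

for i, (density, radius) in enumerate(marbles):
    mass = density * (4.0 / 3.0) * math.pi * radius ** 3  # mass from density and volume
    accel = FORCE / mass                                   # a = F / m
    t = math.sqrt(2.0 * DISTANCE / accel)                  # d = 0.5 * a * t^2, from rest
    print(f"marble {i}: mass={mass:.3f}  arrives at t={t:.3f}")
```

Whether "denser but smaller" ends up heavier or lighter depends on how fast the radius shrinks relative to the density increase, which is exactly what would make the race interesting to watch.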
-
I'd also like to know what the general scaffold for building a recursion with nodes is. In programming you can call a function from within the function itself, but in nodes you can't connect an output to the input of the same node.
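Just to illustrate the contrast I mean (generic Python, not Scene Nodes code): the recursive form calls itself directly, while a node graph would typically have to unroll the same thing into an explicit iteration that carries a value forward step by step.

```python
# Recursion: a function calling itself. There is no node-graph equivalent,
# since a node's output can't feed its own input.
def factorial_recursive(n):
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# What node graphs usually offer instead: an explicit iteration that
# carries the accumulated value forward, step by step.
def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i   # the loop body plays the role of the repeated node
    return result

print(factorial_recursive(5), factorial_iterative(5))   # 120 120
```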
-
I've tried to replicate Noseman's Circle Packing (because it's x10 simpler) in 2024, but it didn't work... Either something broke or something changed in Scene Nodes since that video.
-
I think this idea is good for teaching about arrays (it could be split into 3 parts): a Random Walk Spline.
Parameters:
- Points (integer) - the number of points added to the end of the spline
- Length (integer) - the distance between two consecutive control points
- Variance (%) - a factor to vary that distance
- Freedom Space X, Y, Z (MIN angle, MAX angle) - limits the space in which the spline travels; for example (0,360) is random all over the place, while (0,0) will travel along the (1,1,1) vector.
A rough Python sketch of the idea is below. Inspired by this very old plugin https://aescripts.com/umami/
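Here's the rough sketch as a Python Generator object in C4D. It's untested, the parameter values are hard-coded stand-ins for what would really be User Data fields, and the Freedom Space angle limits are simplified to a per-axis step range rather than true min/max angles:

```python
import c4d
import random

# Hypothetical parameter values; in practice these would be User Data fields.
POINTS = 50                # number of control points on the spline
LENGTH = 20.0              # base distance between consecutive control points
VARIANCE = 0.25            # +/- 25% variation of that distance
STEP_RANGE = (-1.0, 1.0)   # simplified stand-in for the Freedom Space limits

def main():
    random.seed(12345)     # fixed seed so the spline doesn't change on every re-evaluation
    pos = c4d.Vector(0)
    points = [pos]
    for _ in range(POINTS - 1):
        # pick a random direction within the allowed range and normalise it
        step = c4d.Vector(random.uniform(*STEP_RANGE),
                          random.uniform(*STEP_RANGE),
                          random.uniform(*STEP_RANGE))
        if step.GetLength() < 1e-6:
            step = c4d.Vector(1, 0, 0)
        step = step.GetNormalized()
        # vary the segment length by the variance factor
        seg = LENGTH * (1.0 + random.uniform(-VARIANCE, VARIANCE))
        pos = pos + step * seg
        points.append(pos)

    spline = c4d.SplineObject(len(points), c4d.SPLINETYPE_LINEAR)
    spline.SetAllPoints(points)
    spline.Message(c4d.MSG_UPDATE)
    return spline
```

For the teaching angle, the interesting bit is that the points list is exactly the array the three parts of the exercise would build up and refine.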
-
Here's a fun idea: Outline Selection. I can't reproduce it in 2024 because of bugs in the involved capsules. I'm documenting them right now.
-
This reminds me of an old plugin that worked on R19 and was similar to the Volume Builder, but for splines. This effect was very easy to achieve. I searched the internet all night and couldn't find it 😭 Found it
-
: dead link
-
Node Packing 😛 What is the second Circle Spline used for ?
-
How to freeze C4D with one simple move: sever the ...} - Selection connection. C4D can't handle me
-
I thought Named Weights were to be used as an input for the Vertex Map but you have it empty. Actually I thought that was the single most important port that made the whole thing work. What's its purpose then ?
-
It seems Scene Nodes are very customizable... I'm not sure if this is good or not... Definitely good for someone who already knows how things work, but for anyone else everything is hidden... What is actually annoying is that most output ports are visible but most input ports are hidden! Maybe there should be an Easy/Hard mode in the View menu: Easy would have all possible ports exposed, so users get an idea of the most a node can handle; Hard would have only the most essential ports exposed. Where is the color legend? I'm still confused about which colors correspond to which data types, what it means if it's a hollow circle or 4 rectangles, what a Vector4D does, why there is no Vector3D, and what kind of data type the damn Weights is, 'cause right now only String makes the Average node work...
-
Never mind, I found what makes that node change appearance... So... since the file link is broken I tried to reconstruct it myself... Nothing worked of course, I didn't have high expectations of my copying ability... Maybe there is something about the port data type that is not visible in the screenshots... What I enjoyed the most was finding the most OCD-satisfying layout; it looks like an electronics schematic!
-
My Aggregate Average node doesn't look like yours... Did you give the name to some other node?
-
The Poke article must have a broken download link... I don't know if the other articles have the same problem...
-
Someone had asked for a hypercube setup a few months ago. 3 days ago someone made a YT video tutorial with XPresso
-
3Δ.mp4
-
Wrong approach to distributing instances. You should have used the Spline Effector. Notice the order of the Effectors applied on the Text. Untitled 4.c4d
-
I have questions (concerning the Poke article):
1. Is this really a construct of how a Field works?
2. What is the "Operation" port?
3. You use the Iterate Collection to "Iterate over all selected Polygons", but you connect it to the Vertex Indexes, not the Polygon Indexes... Then you iterate the previous iteration of the indexes to iterate over all points? A vertex is a point in the first place.
4. If I understand correctly, the above setup converts a vertex weight map to a polygon vertex map?
5. Am I asking too much if I were to suggest the following to Maxon: add a node that reads/passes and evaluates weights iteratively in Vertex and Polygon modes, bundled in one node so it can plug into any operation, like Poke, that doesn't support Fields? (It needs some extra mechanics implemented in Neutron, but it's not a big deal.)
-
Thanks ! Don't know how I missed that...