Everything posted by HappyPolygon

  1. If you select the polygons that do not face the camera, reverse their normals, and then set the projection to Front, the projection will affect only the outward-facing normals. But no, I couldn't find a Camera Projection polygon selection for automatically selecting polygons seen by the camera. I've suggested that Maxon add functionality like this using Fields.
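In the meantime, the test itself is easy to script: a polygon faces the camera when the dot product between its normal and the direction from the polygon to the camera is positive (this ignores occlusion by other geometry). A minimal pure-Python sketch — the Polygon class and facing_camera helper are invented for illustration; in Cinema 4D you would read normals and the camera position through the c4d Python API and write the result into a polygon selection tag:

```python
# Sketch: back-face test for selecting camera-facing polygons.
# A polygon faces the camera when its normal points towards it,
# i.e. dot(normal, camera_pos - polygon_center) > 0.
from dataclasses import dataclass

@dataclass
class Polygon:          # invented stand-in for real mesh data
    center: tuple       # (x, y, z) polygon centre
    normal: tuple       # (x, y, z) unit normal

def facing_camera(poly, cam_pos):
    """True if the polygon's normal points towards the camera."""
    view = tuple(c - p for c, p in zip(cam_pos, poly.center))
    return sum(n * v for n, v in zip(poly.normal, view)) > 0.0

polys = [
    Polygon(center=(0, 0, 0), normal=(0, 0, 1)),   # faces +Z
    Polygon(center=(0, 0, 0), normal=(0, 0, -1)),  # faces -Z
]
camera = (0, 0, 10)  # camera on the +Z axis
print([facing_camera(p, camera) for p in polys])  # [True, False]
```

A Fields-based version would essentially evaluate the same dot product per polygon and drive the selection from it.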
  2. This kind of effect is usually faked using the ChanLum shader. I've never seen anyone do it with hair in C4D natively. You don't have to share the file, a simple screenshot of how the hair looks on the human and then on the sphere should give us more information.
  3. Any idea how this type of projection render was made ? aBd4Lp1_460svav1.mp4 Could it be just a 360 spherical camera or was it a baked reflection of a sphere reflecting everything that is depicted on the screen ? Can C4D bake an animated texture like that ? Share any cool technical trivia about it.
  4. It's a forum limitation. You can't delete for the same reason you can't delete this thread. You need to contact an admin. 😔
  5. I have a SATA to USB cable and a case for internal hard drives with SATA also connected to USB. So I never look explicitly for external hard drives anymore (both cost and security), I prefer my data easily accessible via USB. I didn't know about this term, thanks 😊 Yeah, this is mostly what I'm concerned with... From what I've understood so far, SSDs are suited mostly for reading data, not re-writing. But most people will use SSDs as the main boot device because they're fast, completely disregarding the fact that Windows or any OS is constantly re-writing (cache, temp files, browser files, OS updates, registry).
  6. To me it's more logical to find a plugin for the other app since that has the import problem. What is it ? Blender?
  7. Any thoughts about what should change ? How else could we texture ?
  8. I've only found this but he does it in post. Maybe it's just an inverse Fresnel.
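For anyone recreating it, "Fresnel" in this context is just a facing-ratio falloff: the standard version brightens grazing angles (the rim), while the inverse version brightens surfaces facing the camera. A minimal sketch — the helper names are made up, and a real shader would evaluate this per pixel with unit normal and view vectors:

```python
# Sketch: Fresnel vs inverse-Fresnel as facing-ratio falloffs.
def facing_ratio(normal, view):
    """Clamped dot product of unit normal and unit view direction."""
    d = sum(n * v for n, v in zip(normal, view))
    return max(0.0, min(1.0, d))

def fresnel_weight(normal, view, power=2.0):
    return (1.0 - facing_ratio(normal, view)) ** power   # bright at grazing angles

def inverse_fresnel_weight(normal, view, power=2.0):
    return facing_ratio(normal, view) ** power           # bright facing the camera

view = (0.0, 0.0, 1.0)      # looking straight down +Z
facing = (0.0, 0.0, 1.0)    # surface facing the camera
grazing = (1.0, 0.0, 0.0)   # surface seen edge-on
print(fresnel_weight(facing, view), fresnel_weight(grazing, view))                  # 0.0 1.0
print(inverse_fresnel_weight(facing, view), inverse_fresnel_weight(grazing, view))  # 1.0 0.0
```

Doing it in post, as in the video, amounts to rendering the facing ratio as a pass and using it (or its inverse) as a mask.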
  9. Over the last 9 months I've purchased 3 asset libraries from CGAxis (1 model, 2 textures). It's over 200 GB in total and it's going to add up significantly if I purchase more bundles in the future. Which of the following do you recommend for keeping them safe?
  • Flash drive for each separate library
  • SSD for all
  • HDD for all
  The criteria for choosing are the following:
  • Transfer (read) speed via USB
  • Cost
  • Lifetime of medium
  Assets are packed in separate zip files per category (5-10 GB each). Should I unzip them or keep them as they are and just unzip the ones I need for any current project? (Haven't unzipped any yet; probably not going to change the size much since all textures are JPGs. I'll probably keep the model library zipped because C4D files are natively large.) Most textures are provided in 4K and 8K (almost double the storage size). Should I download the 8K or rely on some AI resize in the future? The libraries are 1000+ Models Bundle, Physical Summer Bundle and Physical Special Bundle.
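On the zip question, keeping the archives zipped and pulling out single files on demand is cheap to script with the standard library. A sketch — the archive contents and the extract_one helper are invented for the example:

```python
# Sketch: keep asset bundles zipped and extract only what the current
# project needs, using only the standard library.
import io
import os
import tempfile
import zipfile

# Build a tiny stand-in "category" archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("textures/brick_4k.jpg", b"fake jpeg bytes")
    zf.writestr("textures/wood_4k.jpg", b"more fake bytes")

def extract_one(zip_bytes, member, dest):
    """Pull a single file out of an archive without unpacking the rest."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return zf.extract(member, dest)

with tempfile.TemporaryDirectory() as tmp:
    path = extract_one(buf.getvalue(), "textures/brick_4k.jpg", tmp)
    print(os.path.basename(path))                           # brick_4k.jpg
    print(sum(len(files) for _, _, files in os.walk(tmp)))  # 1 file extracted
```

Note that JPEGs are already compressed, so zipping them mainly buys tidier storage rather than saved space — consistent with the observation above that unzipping probably wouldn't change the total size much.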
  10. Fused 2023.3 Insydium has released Fused 2023.3, the latest update to its collection of Cinema 4D add-ons. The update includes new versions of particle and multiphysics systems NeXus and X-Particles, plant generator Taiao, terrain generator TerraformFX, and procedural modelling toolset MeshTools. NeXus gets the option to use particle properties to drive parameter values, new modifiers for object avoidance, flocking and generating cellular growth patterns, and a new simulation up-resing system. X-Particles gets the option to generate new particle effects from particle caches. In addition, all of the products are available as perpetual licences via Insydium’s Fused Complete product bundle, as well as via subscription. Whereas a perpetual licence of X-Particles and 90-day trials of the other plugins used to cost $715, the new $1,336.50 Fused Complete option provides a full set of perpetual licences. The price of 12 months’ maintenance has fallen by $20, to $337.50, while the price of an annual subscription has risen by just over $28/year, to $526.50/year. The Fused plugins are compatible with Cinema 4D R19+, running on Windows 7+ and macOS 10.13+, with the exception of TerraformFX, which requires Cinema 4D R20+. Fused Complete costs $1,336.50 and includes perpetual licences of all of the software, while 12 months’ maintenance costs $337.50. Fused subscriptions cost $526.50/year. Prices exclude tax. https://insydium.ltd/news/latest-news/insydium-fused-2023.3-update/ https://insydium.ltd/products/whats-new-in-fused/

Grove 2.0 F12 – aka developer Wybren van Keulen – has released The Grove 2.0, a major new version of the software for generating biologically plausible tree models. The release transforms The Grove from a Blender plugin to a high-performance standalone application with add-ons for both Blender and Houdini. 
Other changes include better simulation of the way that trees bend under gravity, a new Stake tool for adjusting the forms of the trees generated, and a new system of UV islands for unique bark details. The Grove takes a parametric approach to generating trees, with controls that mimic the factors determining the forms of real plants, resulting in more realistic-looking models. Once the overall form of a tree has been set, The Grove fills in details by using ‘Twigs’: instanced geometry representing not only actual twigs, but leaves, flowers and fruit, sold separately to the core app. The resulting textured geometry can be exported from the user’s host software in standard file formats, including FBX and OBJ, for use in other DCC applications. Users can also generate wind and growth animations for trees, which can be exported in Alembic format. The Grove 2.0 is a major update, changing the software from a Blender add-on to a standalone application written in high-performance programming language Rust. The new Grove Core, which lacks a user interface, does all of the under-the-hood work, running tree growth simulations and generating the resulting 3D models. Users control the software via add-ons for host applications – which now include Houdini as well as Blender. The Houdini plugin makes it possible to control the form of a 3D tree via a geometry (SOP) network, although it’s officially in beta, and currently lacks the interactive editing tools of the Blender plugin. For Blender users, benefits of the new architecture include performance improvements of “anywhere from 5 to 20 times”, and the fact that simulations are now portable. Simulations are saved to the working .blend or .hip file, and move with that file. The update also reworks the way that bending of branches is simulated, causing trees to respond more naturally to gravity, and dispensing with the old Solidify and Fatigue controls. 
The Grove also now more accurately mimics the way that trees shed their lower branches as they mature. New tree-editing tools include Stake, which keeps trees upright until a specified height; and there have been updates to the Auto Prune system and the Attract and Deflect forces. Other changes include a new system of UV islands for painting unique details onto tree bark, supplementing The Grove’s existing seamlessly repeating bark textures. The Grove 2.0 is compatible with Blender 2.80+ and Houdini 19.5, running on Windows, Linux and macOS. The Starter edition has a MSRP of €89 (around $97). The Indie edition has a MSRP of €149 (around $162). The Studio edition has a MSRP of €720 ($783). Individual Twigs cost €9 ($10). https://www.thegrove3d.com/releases/the-grove-2-0/ https://www.thegrove3d.com/wp-content/uploads/2023/05/SoFast.mp4 https://www.thegrove3d.com/wp-content/uploads/2023/05/HoudiniPhysics.mp4 Autograph 2023.6 Released earlier this year, Autograph is pitched as a next-gen tool for motion design and visual effects. Like After Effects, it uses a layer-based compositing model, with its timeline featuring a standard layer stack and dope sheet, but it combines it with a 3D mode for viewing and manipulating 3D assets. 3D workflow is based around Universal Scene Description, and the software has its own real-time physically based renderer, Filament, which can be used to render 3D assets for integration into compositions. Autograph sells itself on ease of use, with its system of Generators pitched as a simpler alternative to a node graph for creating procedural content, and its Modifiers described as a simpler alternative to expressions. However, its major claim is to be the “first responsive design video software”, with a workflow geared towards artists who need to deliver a project at multiple resolutions and aspect ratios from a single file. 
It can also be used to generate multiple design or language variants of a project by connecting database data in CSV format to the software’s Instancer. The tools can be teamed to track deforming surfaces and generate matching animated UV maps, making it possible to create digital makeup or tattoo effects, or apply logos to clothing, as shown in this video. In addition, Autograph is now compatible with online GLSL shader-authoring platform Shadertoy, making it possible to copy shader code from the site and paste it into a Shadertoy Generator within Autograph. Updates to existing features include new Shape Path Generators and Modifiers for creating and manipulating 2D shapes, including Circle, Rectangle, Polygon, Star, and Trim Path. The 3D toolset gets a new Camera UV Project Modifier, and support for Catmull-Clark and Bilinear subdivision for 3D mesh primitives generated in Autograph. Improvements to camera workflow include the option to import animated cameras from DCC applications or tracking software like SynthEyes via USD format. There are also UX improvements to Project Panel, Viewer Panel and Properties Panel. Users of Autograph Studio, the full edition of the software, also get a new Python API, making it possible to automate common tasks or extend the UI with custom PySide panels. Pipeline integration changes include support for hardware-accelerated AV1 encoding on AMD, Intel and Nvidia GPUs. Left Angle also now seems to be focusing its attention primarily on Autograph. Although Artisan – its cloud-based shot review platform, launched at the same time as Autograph – is still mentioned on the website, it is no longer promoted on the homepage or listed in the online store. Autograph is compatible with Windows 10+, CentOS 7/Ubuntu 20.04+ Linux, and macOS 10.15+. The software supports Apple Silicon processors natively on macOS 11.0+. 
Perpetual licences of Autograph Creator, for users with annual revenue under $1 million/year, have a MSRP of $945; rental costs $35/month or $315/year. Perpetual licences of Autograph Studio have a MSRP of $1,795; rental costs $59/month or $599/year. https://www.left-angle.com/#news|id=92 https://www.left-angle.com/public/doc-autograph/dev-master/chapters/whats_new/2023.6.1.html Open Image Denoise 2 Intel has released Open Image Denoise 2, the next major version of its open-source render denoiser, integrated into CG applications like Blender, Cinema 4D and Houdini. Open Image Denoise 2.0 (OIDN 2.0), released in late May, made the formerly CPU-only denoiser compatible with current AMD, Intel and Nvidia GPUs. The 2.01 update, released last week, improves performance on Intel integrated graphics. First released in 2019, Open Image Denoise is a set of “high-performance, high-quality denoising filters for images rendered with ray tracing”. The technology is now integrated into a range of DCC tools and renderers, including Arnold, Blender’s Cycles render engine, Cinema 4D, Houdini, Modo, V-Ray and Unity, where it is used to denoise lightmaps. OIDN builds on neural network library oneDNN, meaning that like Nvidia’s OptiX GPU denoising technology – also integrated into many renderers – it uses AI techniques to accelerate denoising. But unlike OptiX, OIDN isn’t hardware-specific: while it was designed for Intel 64 CPUs, it supports “compatible architectures”, including AMD CPUs and Apple’s M-Series processors. To that list, we can now add GPU architectures: Intel previewed GPU support in Open Image Denoise last year, with the functionality becoming publicly available in OIDN 2.0. (Confession time: the update was actually released at the end of May, but we only spotted it while researching a story on Chaos’s Vantage 2.0 renderer, which incorporates OIDN.) 
The change should enable software developers to take advantage of the full processing power of users’ machines, and to support CPU and GPU denoising with a single code base. As with CPU denoising, GPU denoising is hardware-agnostic, being supported on AMD GPUs via HIP, on Intel GPUs via SYCL, and on Nvidia GPUs via CUDA. OIDN 2.0 supports AMD’s RDNA 2 and 3 architectures, so it should be compatible with Radeon RX 6000 and 7000 Series and Radeon Pro W6000 and W7000 Series GPUs. It also supports Intel’s own Xe-HPG architecture, used in new discrete GPUs from the Arc Pro A-Series. And it supports Nvidia GPU architectures from Volta onwards, so it should be compatible with desktop cards as far back as 2018, including GeForce RTX consumer GPUs and Quadro RTX and RTX workstation GPUs. OIDN is also supported on the integrated GPUs in Intel’s Core, Pentium and Celeron processors, with the 2.01 update improving performance on integrated graphics under Linux. Open Image Denoise 2 is compatible with 64-bit Windows, Linux and macOS. You can find a full list of CPU and GPU architectures supported on the OIDN website. Both source code and compiled builds are available under an Apache 2.0 licence. https://www.openimagedenoise.org/

Unity 2023.1 Unity has released Unity 2023.1, the latest tech stream update to the game engine and real-time renderer. It features changes throughout the software, including support for ARM-based Windows devices like new Surface Pro and ThinkPad tablets, updates to multiplayer networking, and better support for XR devices. Other highlights include:
• New water-authoring tools create waves and surface currents
• Higher-quality hair and fur rendering
• Better rendering of skin and transparent objects
• Screen space lens flares
• New LightBaker backend makes it possible to edit scenes during light baking
The Unity Editor is compatible with Windows 10+, Ubuntu 18.04/20.04 Linux, and macOS 10.14+. Unity is only available via subscriptions. 
Free Personal subscriptions can be used by anyone with revenue of up to $100,000/year. They have a non-removable splash screen, plus the restrictions listed in the comparison table of the subscription plans. Plus subscriptions cost $399/year. Pro subscriptions cost $2,040/year. https://blog.unity.com/engine-platform/2023-1-tech-stream-now-available https://docs.unity3d.com/2023.1/Documentation/Manual/WhatsNew20231.html Multiverse | USD 8.1 for Maya The update adds a new light linking editor, making it possible to perform light linking between lights in the Maya dependency graph and scene items in the USD stage. Light linking is currently supported in 3Delight, Arnold and Redshift. It is also now possible to snap Maya geometry to Multiverse compounds in the viewport. Multiverse | USD 8.1 is available for Maya 2018+ on Windows, Linux and macOS. It is compatible with Arnold 5.2+, 3Delight 2.0+, Redshift 3.0.67+, RenderMan 23.2+ and V-Ray 4.3+. The software is rental-only, with Pro Offline subscriptions starting at $270/year for interactive floating licences and $90/year for render licences. Pro Cloud subscriptions cost $30/month or $300/year. There is also a free Solo edition for indie artists, which can be used in commercial projects. https://j-cube.jp/solutions/multiverse/ https://j-cube.jp/solutions/multiverse/docs/setup/release-notes.html#v8-1-0-2023-06-30 Wonder Studio Wonder Dynamics has launched Wonder Studio, its much-hyped AI-powered online platform for inserting 3D characters into video footage, after several months in closed beta. The platform “automatically animates, lights, and composes CG characters into a live-action scene”, tracking and replacing an actor in the source footage with a 3D character, even matching the facial animation. It supports custom characters created in Blender and Maya, and can export clean plates, roto masks, and tracking and motion-capture data for use in previs, motion graphics and VFX pipelines. 
Wonder Studio got a lot of attention in the mainstream tech press when it was announced earlier this year. In part, that was due to the names associated with Wonder Dynamics: the firm was co-founded by actor Tye Sheridan, while the advisory board includes Steven Spielberg, who directed Sheridan in Ready Player One. However, the buzz also came from an impressive demo reel (embedded above), and the fact that Wonder Studio is designed to slot into existing previs and VFX workflows, not to replace them. According to Wonder Dynamics, the platform’s AI automates “80%-90% of manual VFX work”, particularly repetitive tasks like roto, leaving artists more time to focus on creative decisions. As well as simply rendering out the resulting video, which can be done at up to 4K resolution, Wonder Studio can also export the data that was used to generate it. That includes clean plates and alpha masks for the actor, both exported as a sequence of PNG images; motion-capture data in FBX format; and for Blender users, a complete Blender scene file. As well as the maximum poly count for 3D characters and the 4K render resolution limit, Wonder Studio has a number of technical limitations. The biggest is that it doesn’t currently support character-character or character-object interactions, limiting the scope of the shots that it can process. It also only has partial support for occlusion, so the quality of the output may drop where a character is obscured by foreground objects in the shot. However, it already looks like a promising solution for previs and less demanding VFX projects – and a clear indication of where AI film-making tools could go in future. Wonder Studio is browser-based, and runs in Chrome or Safari. It does not currently support mobile browsers. The platform is compatible with Blender 3.3.1+ and Maya 2022+. Free accounts have a maximum export resolution of 1080p. Lite subscriptions are priced at $29.99/month or $299.88/year and can also export mocap data. 
Pro subscriptions are priced at $149.99/month or $1,499.88/year, raise the maximum export resolution to 4K, and also make it possible to export clean plates, roto masks and the Blender scene. All account types permit commercial use, although the Terms of Use give Wonder Dynamics a non-exclusive licence to use any content created via the platform, including to develop its AI models. https://wonderdynamics.com/ https://help.wonderdynamics.com/intro-to-wonder-studio/introduction/faq

Move.ai Move.ai has introduced monthly pricing for its promising AI-based markerless motion-capture service, which lets users extract motion data from video footage of actors. The $50/month subscriptions provide 1,200 processing credits per month, equivalent to 20 minutes of video. In related news, Move.ai has also announced Move One, a new system that will enable users to extract motion-capture data from footage captured using a single camera. It will be rolled out in beta in August, initially on iPhone. Registering for a free account on the Move.ai website enables you to trial experimental mode on two minutes of footage, after which, processing video requires a subscription. Paid subscriptions cost $50/month, which provides 1,200 credits per month for processing video, or $365/year, which provides 1,800 credits per month for processing video. Credits are consumed at a rate of one credit per second of video per person animated. Processing more video requires extra credits, which cost $0.07 each for users with monthly subscriptions, or $0.04 each for users with annual subscriptions. You can find more details in Move.ai’s Fair Use Policy. 
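The credit maths above is easy to sanity-check: at one credit per second of video per person, the 1,200 monthly credits equal 20 minutes of single-person footage. A sketch — the job figures and helper are invented for illustration; plan prices are as quoted:

```python
# Sketch: Move.ai credit arithmetic as described in the post. One credit is
# consumed per second of video per person animated.
CREDITS_PER_MONTH = 1200   # included with the $50/month plan
OVERAGE_MONTHLY = 0.07     # $ per extra credit, monthly plans
OVERAGE_ANNUAL = 0.04      # $ per extra credit, annual plans

def credits_needed(seconds, people=1):
    return seconds * people

# 1,200 credits = 20 minutes of single-person footage, as quoted.
print(CREDITS_PER_MONTH / 60)  # 20.0 (minutes)

# Hypothetical job: 15 minutes of footage with two people in frame.
job = credits_needed(15 * 60, people=2)   # 1800 credits
extra = max(0, job - CREDITS_PER_MONTH)   # 600 credits over the allowance
print(round(extra * OVERAGE_MONTHLY, 2))  # 42.0 dollars on a monthly plan
print(round(extra * OVERAGE_ANNUAL, 2))   # 24.0 dollars on an annual plan
```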
https://app.move.ai/billing/subscription-management Chaos acquires AXYZ design The buyout is Chaos’s latest within the architectural tools market, following its acquisition of Corona and merger with Enscape. In this case, the software it has bought complements, rather than competes with, its own products, which include production renderer V-Ray and real-time renderer Vantage. There’s no mention in Chaos’s announcement of AXYZ design’s other major product line, its metropoly range of posed and animated scanned human characters. AXYZ design characters are available as stock content in a number of rival architectural rendering tools, including Lumion and Twinmotion. We’ve contacted AXYZ design to ask whether the metropoly characters will remain available, and whether it will continue to supply characters to other software developers, and will update if we hear back. https://www.chaos.com/blog/chaos-acquires-axyz-design-and-its-4d-animated-human-technology-for-archviz https://secure.axyz-design.com/en/blog/1077-246-axyz-joins-chaos Skybox AI Model V2 Although it’s a small change in version number, it’s a significant update, rolling out the platform’s AI Model V2, which increases the visual fidelity of the images generated. As well as generating images with more fine detail and better volumetric lighting, the new AI model has a “vastly better understanding” of creative prompts. You can see side-by-side comparisons of the old and new AI models in the video above. Skybox AI is currently available free in beta. It’s browser-based, so it should run in standard modern web browsers, although some features are only available on desktop machines or larger tablets. Blockade Labs hasn’t announced a final release date or pricing yet. https://skybox.blockadelabs.com/ Phoenix 5.2 They’re primarily performance updates, making simulations “up to 20% faster” than Phoenix 5.1. 
Specific changes include the option to run the PCG Fluidity method for smoke and fire simulations on the GPU rather than the CPU, and updates to the FLIP solver, cache generation and particle previews. There is also a new Simulation Speed rollout for troubleshooting performance, showing which phases in the simulation are taking the most time to compute. However, there are several new features, including the option to guide liquid simulations by using the new Directed Velocity option in the Liquid Source. Direction of flow can also be controlled with a texture map. In addition, it is now possible to vertically expand the simulation grid for ocean simulations. Changes specific to 3ds Max include the option to float or dock key rollouts anywhere in the interface, including in a new floating Main Window. There is also a new Axis Lock option to lock the rotation of Active Bodies in simulations. Maya users get support for Isosurface render mode in TexUVW, making it possible to transport texture coordinates along fluids; and better support for interactive rendering with V-Ray CPU. Phoenix 5.2 is available for 3ds Max 2018+ on Windows 7-10 and Maya 2019+ on Windows 7-10, RHEL/CentOS 7.2 Linux and macOS 10.11+. It is available rental-only. Subscriptions include both 3ds Max and Maya editions, and cost $70/month or $390/year. https://docs.chaos.com/display/PHX4MAX/5.20.00 https://docs.chaos.com/display/PHX4MAYA/5.20.00

Infinigen Researchers at Princeton Vision & Learning Lab have released Infinigen, an intriguing free tool for generating procedural 3D environments, including terrain, plants and creatures. The software, which is available to compile from source, is based on Blender, and so should be able to export assets in any format that Blender can, for use in other DCC applications or game engines. 
Generate 3D terrain, plants and creatures based on a set of procedural rules
Developed as part of the research paper Infinite Photorealistic Worlds using Procedural Generation, Infinigen generates 3D environments procedurally using “Math rules only. Zero AI. Just graphics.” It modifies Blender primitives into environment assets via a library of procedural rules, organised into readymade generators for different asset types. There are generators for terrain, plants (and plant-like underwater objects like coral), and even creatures, including carnivores, herbivores, birds, beetles and fish. According to the paper, the system produces “high-quality” animation rigs, and can generate hair and fur, with grooming handled automatically. It can also simulate skin folding and creasing, using cloth simulation. There are 50 procedural material generators for generating textures.
Combine generated assets into complete 3D environments, from mountains to oceans
Infinigen includes scatter generators, distributing assets over terrain to create complete 3D environments. Environment types shown in the paper range from mountains, rivers, coasts, plains, forests and deserts to underwater scenes, caves and even icebergs and floating islands. The system simulates dynamic water using FLIP simulation, sun and sky lighting using the Nishita sky model implemented in Blender, and weather effects using Blender’s particle system. Environments are generated as full geometry – the system doesn’t ‘fake’ details using billboard textures, or even bump or normal mapping – so they will probably need optimisation for use in entertainment work. It should be possible to export the data in any format that Blender supports, including FBX, Alembic and USD, for use in other DCC applications or game engines. Infinigen can generate a range of render passes, including depth, surface normals, and Cryptomatte-style ‘panoptic segmentation’, and data passes like optical flow and 3D scene flow. 
An integrated transpiler can also convert the underlying Blender node graph into Python code.
Controlled from the command line, and may take a while to process
According to the instructions in the GitHub repository, generating environments is done from the command line, controlling the type of environment generated with flags. The software is benchmarked on a fairly high-end system with two Intel Xeon Silver 4114 server CPUs and a Nvidia GPU, so the process may take a while to complete. The standard test commands shown on GitHub take “approximately 10 minutes and 16GB of memory to execute on an M1 Mac”.
Currently mainly a research tool, but is intended to grow in future
In its initial release, Infinigen is primarily intended as a resource for computer vision research. However, the researchers say that “in future work, we intend to make Infinigen a living project … through open-source collaboration with the whole community”. You can find out how to contribute on the project website, and planned features in the online roadmap. In the immediate future, code for water, fire and smoke simulations – not included in the initial release – is due in “mid July”. In the longer term, it may also be possible to generate cities: the researchers expect the system to expand from natural environments to built environments and artificial objects. Infinigen is available under a 3-Clause BSD licence. At the minute, the GitHub repository doesn’t provide compiled binaries, so you will need to compile it from source. The software has been tested on Ubuntu 22.04 Linux and macOS 12+. It currently requires an Apple Silicon or Nvidia GPU, but support for AMD GPUs is planned. Windows users are advised to use WSL to set up an Ubuntu terminal environment on a Windows machine. WSL is compatible with Windows 10+. 
https://infinigen.org/

PhotoLine 24 Although recently overshadowed by the Affinity tools, PhotoLine is a versatile alternative to Photoshop, with support for 32-bit linear workflow for bitmap images, plus both vector design and page layout features. Version 24 overhauls its layer workflow with the option to save the layer structure of a document, including the visibility and blend mode of the layers, as a Layer Composition. Layer Compositions can be used as presets for future documents, or to compare variants of a document. Workflow improvements include the option to use the Layer Tool to move or copy the contents of a selection without creating a new layer, and to search for fonts in the Layer List. It is also now possible to use 16- and 32-bit images as layer masks and clipping layers with full bit depth. Outside the layer system, new features include support for formatting marks in text. A number of the brush tools have been updated, with changes including support for transparency in the Replace Color brush, and a new Replace mode in the Copy brush. UX improvements include automatic snapping when detaching and moving document windows, and an overhaul of the look of document windows on Windows to match the rest of the UI. Pipeline integration changes include support for open-source image editor GIMP’s .xcf file format, and better support for the PDF, SVG and TIFF file formats. https://www.pl32.com/pages/rnote.php

Plasticity 1.2 First released in April, and pitched as “CAD for artists”, Plasticity is a streamlined NURBS modeller. The software is intended to provide a low-cost, artist-friendly alternative to tools like MoI – which developer Nick Kallen describes as a “huge inspiration” – and Fusion 360, and is aimed at concept art as well as industrial design. It features a streamlined UI, with on-screen clutter reduced by context-sensitive widgets and pop-up UI panels, and uses key bindings and selection modes that should be familiar to Blender users. 
Plasticity 1.2 is available for Windows 10+, Ubuntu 22.04+ Linux and macOS 12.0+. It runs on both Intel and Apple Silicon Macs. The update is free to existing users. New Indie licences cost $99 and import and export files in OBJ, STEP and Parasolid XT format. Studio licences support a wider range of file formats and cost $299. Both are perpetual node-locked licences. There is also a free trial edition, which only imports STEP files and exports OBJ files. Although there are over 30 new features in Plasticity 1.2, the focus of the update is surfacing. The initial release of Plasticity was optimised for making what Kallen describes as “mechanical-looking” models: models made up mainly from planes and primitives rather than NURBS patches. The update focuses on creating pure NURBS models, introducing new tools for quickly generating patches between 3D curves, including the option to turn open curve loops into patches. More advanced features include the option to specify G0, G1 or G2 continuity individually at edge boundaries. https://www.plasticity.xyz/

ZBrush 2023.2 New features in ZBrush 2023.2 include the Anchors brush. It enables users to set anchor points along the length of a SubTool, then manipulate the geometry between those points. It has a range of different modes, including Scale and Inflate, which resize the SubTool, and Move, Rotate and Twist, which repose it. The latter look to be a quick, controllable way to repose characters: in particular, making it possible to pose limbs with greater precision than is possible using existing tools. The Anchors brush can be used in conjunction with the new Proxy Pose system introduced in ZBrush 2023.1, which makes it possible to pose models with less-than-ideal topology. Maxon describes it as ‘version 1.0’ of the tool, so its functionality should evolve in future releases. ZBrush’s Contact feature has been updated. 
Originally designed as a way to make clothing move with a character when the character is reposed, it also now functions as a way to snap one piece of geometry to another. Users can draw out a transpose line between two SubTools to have ZBrush automatically move and rotate the first SubTool, using surface normals to align it to the second. By setting up to three Contact points, it is possible to control how the SubTool being moved is rotated. Finally, ZBrush 2023.2 adds two new slider controls to Spotlight, ZBrush’s projection texturing system. They control how a surface is displaced when applying an alpha, with the new Spotlight MidValue slider clamping the maximum displacement, and controlling whether the surface moves inwards or outwards. Spotlight Alpha Blur smooths the displacement. ZBrush 2023.2 is compatible with Windows 10+ and macOS 10.14+. New perpetual licences cost $895; subscriptions cost $39/month or $359/year. https://support.maxon.net/hc/en-us/articles/9352841458076-ZBrush-2023-Releases

Redshift 3.5.17 Arguably the most significant change in Redshift 3.5.17 is the new MatCap Shader node, which Maxon describes as “lay[ing] the foundation for non-photo-real materials in Redshift”. At the minute, it’s rather more limited, but enables users to create simple stylised materials by mapping an image onto a mesh relative to the render camera, as shown in the image above. Other new features include the Jitter node, which makes it possible to generate colour variations across large numbers of objects that use a particular material. Suggested uses range from generating variation in environment assets like foliage or rocks to creating more stylised motion-graphics-style effects. Performance improvements include better scaling of CPU rendering performance across multiple CPU cores, or across multiple CPUs in a system. 
In addition, the Blender plugin now supports macOS; and Maya users get support for Maya 2024 on macOS and Linux, support for the Windows version having been added in the previous release.

Redshift 3.5.17 is available for Windows 10, glibc 2.17+ Linux and macOS 11.5+. The renderer’s integration plugins are compatible with 3ds Max 2015+, Blender 2.83+, Cinema 4D R21+, Houdini 17.5+ (18.0+ on macOS), Katana 3.0v1+ and Maya 2016.5. The software is rental-only, with individual subscriptions costing $45/month or $264/year.

https://redshift.maxon.net/topic/47698/version-3-5-17-is-now-available-2023-07

SpeedTree 9.5

Released over two decades ago, and now owned by Unity, SpeedTree generates textured, wind-animated 3D trees for real-time and offline work, using a combination of procedural generation and manual editing. Although there have been several different editions of the software over the years, it is currently available as SpeedTree Cinema, for visual effects and feature animation work, and SpeedTree Games. Both include SpeedTree Modeler, the actual plant-authoring software, and SpeedTree Library, a library of readymade 3D plants. SpeedTree Games also includes the SpeedTree SDK.

SpeedTree 9.5: new Frond editing workflow lets you adjust parts of a leaf individually

New features in SpeedTree 9.5 include the option to manipulate sections of Fronds individually. Users can now isolate parts of a leaf, like individual leaflets in compound leaves, by masking them in the Cutout Editor, then apply procedural changes. Suggested use cases include randomising the look of leaves by curling sections, or having forces like wind and gravity affect sections differently.

Other new features include Projectors, a new system for adding procedural details to models. It uses ray casting to create the distribution pattern, projecting light rays into the scene, and generating details wherever the rays strike a surface.
On trees themselves, that makes it possible to create natural-looking directional effects, like moss only growing up one side of the trunk, or snow settling on branches from above. It can also be used to scatter objects on the ground surrounding a tree.

Other changes include support for height maps in the Leaf, Batched leaf and Frond generators. As well as being a way of adding detail to parts of a tree, IDV says that the change “allows you to identify subsections on a mesh via … the Cutout editor, eliminating the need for external modeling software”. The paint tools have also been improved, with new options including brush size, falloff, and opacity margin.

SpeedTree 9.4 added global lighting properties for real-time trees, new gradient and radial noise patterns in the Map editor, and the option to import cameras in FBX format to use as scene cameras. Other changes since SpeedTree 9.0 include new Decal and Mesh detail generators, for generating surface details, and the option to add perspective and orthographic cameras and save them as part of a scene.

SpeedTree Cinema 9.5 and SpeedTree Games 9.5 are available for Windows 7+, CentOS 7 and Ubuntu 18.04/20.04 Linux and macOS 10.13+. The software is rental-only. Indie subscriptions, for users with revenue under $100,000/year, cost $19/month or $199/year. Pro subscriptions, for users with revenue under $1 million/year, cost $499/year for a node-locked licence, or $899/year for a floating licence. Enterprise subscriptions, for studios with revenues over $1 million/year, are priced on demand.

https://docs9.speedtree.com/modeler/doku.php?id=streleasenotes

Forger 2023.4

The update adds a hierarchy system, making it possible to organise more complex models by collecting component parts into groups. New sculpting features include the Sketch Sculpt brush. It’s intended to “easily create new shapes with the stroke of an Apple Pencil”, although sadly, we can’t find images of it in action.
New modelling tools include standard Sweep and Lathe tools, plus Fit Circle, for adding round details to Forger models, and Set Flow, for adjusting the curvature of selected edges.

The lighting toolset gets Area and IES lights, in addition to the lights added in the previous update. The camera toolset gets support for orthographic cameras, and a new default Action Point camera behaviour, in which the camera can be rotated around a point of interest.

Forger 2023.4 is available for iPadOS 15.0+. The base app is free, but is limited to three active files. A paid subscription, which costs $1.99/month or $14.99/year, removes that cap and unlocks a live link to Cinema 4D.

https://www.maxon.net/en/forger

Howler 2022 for free

The developers of hard-to-classify digital painting, animation and video processing tool Howler have made a full recent version of the app available to download for free. Anyone downloading Howler 2022, which was originally released in 2021, is encouraged to upgrade to the current version or support developer Dan Ritchie on Patreon to raise funds for future development of the software.

An idiosyncratic low-cost natural media paint package with a lot of unexpected features

Originally released two decades ago, Howler – originally Project Dogwaffle – is an idiosyncratic sub-$100 digital painting and content creation tool for illustration and animation work. Its core strength is natural media painting, but it throws in a number of unexpected feature sets, ranging from terrain generation and particle sculpting to video repair and rotoscoping. You can see the features added in Howler 2022 itself in this story.

Howler 2022 is available free for Windows only. The latest release, Howler 2024, has an MSRP of $59.99.
https://www.pdhowler.com/Download.htm

USD 8.2 for Maya

The update makes it possible to have both the Multiverse | USD plugin and the native USD for Maya plugin (maya-usd) loaded simultaneously in Maya. It is also now possible to transfer Maya shading networks to Katana, writing USD files containing the networks, and having them reconstructed automatically in Katana.

Multiverse | USD 8.2 is available for Maya 2018+ on Windows, Linux and macOS. It is compatible with Arnold 5.2+, 3Delight 2.0+, Redshift 3.0.67+, RenderMan 23.2+ and V-Ray 4.3+. The software is rental-only, with Pro Offline subscriptions starting at $270/year for interactive floating licences and $90/year for render licences. Pro Cloud subscriptions cost $30/month or $300/year. There is also a free Solo edition for indie artists, which can be used in commercial projects.

DaVinci Resolve 18.5

Blackmagic Design has released its latest updates to DaVinci Resolve, its free colour grading, editing and post-production software, and DaVinci Resolve Studio, its $295 commercial edition.

DaVinci Resolve 18.5 is a major update, adding initial support for USD-based workflows in both editions of the software, plus a new AI-based relighting toolset and improved AI-based video upscaling in Studio. The Studio edition also gets new AI-based video editing tools, including AI-based audio categorisation, transcription and captioning.

VFX artists working in movie pipelines based around the Universal Scene Description format get the option to import USD and USDZ files, including geometry, materials, lights, cameras and animation. There is also a basic set of tools for manipulating, relighting and rendering imported USD assets.

Other new features relevant to visual effects work include Multi-merge, a new tool for managing multiple foreground sources as a composited layer stack, shown at 22:10 in the video at the top of the story.
Multi-merge makes it possible to composite shots using a layer-based workflow as well as the node-based workflow available in Fusion, Resolve’s 3D compositing environment. In addition, DaVinci Resolve Studio’s native AI depth map generator is now available inside Fusion.

Colorists and effects artists using Studio get Relight, a new AI-based shot relighting system. With it, artists can add virtual directional, point or spot lights to a shot, and adjust their colour, surface softness and specularity, as shown at 23:40 in the video at the top of the story. Light intensity information is placed in the alpha channel for use with any of Resolve’s existing grading tools.

Studio’s existing Super Scale feature for up-resing video gets a new 2x Enhanced mode for “extremely high quality 2x output”, with manual controls to adjust noise reduction and image sharpness.

The 18.5 updates also improve DaVinci Resolve’s handling of projects with missing LUTs. Rather than interrupting playback with a warning dialog, missing LUTs are now shown via an overlay at the bottom of the screen. The files can be relinked via a Missing LUTs tab in the LUT gallery.

Other workflow improvements include the option to override color management settings on a per-timeline basis, and to copy color grades to all of the angles within multicam shots when flattening multicam clips. In addition, DaVinci Resolve now automatically creates the inputs and outputs required by Resolve FX effects plugins, removing the need to drag plugins as independent FX nodes.

Video editors using Studio get a number of new AI-based tools for working with the audio in media clips. The software can now automatically sort clips by type (dialogue, effects or music), and can also automatically transcribe dialogue and generate text captions in a new subtitle track.
Workflow improvements include the option to specify the intervals at which timeline backups will be created on a per-timeline basis, and metadata panel support for marker subclips. Under the hood, the edit timeline playback engine has been “hugely improved”, which should smooth playback on lower-powered systems where a project cannot be played back in real time. There is also a new dedicated render cache management window for setting the size of the data caches DaVinci Resolve generates to improve playback performance.

Changes to the media formats supported include the option to export PNG and JPEG image sequences and animated GIFs; and encode support for ProRes, AV1, H.264, MP3 and AAC in MKV containers. Users can also now upload video directly to TikTok from DaVinci Resolve.

There are also a number of changes to the remote monitoring toolset in DaVinci Resolve Studio, including a new free app for iPhones and iPads, DaVinci Remote Monitor.

DaVinci Resolve 18.5 and DaVinci Resolve Studio 18.5 are available for Windows 10+, macOS 12.0+ and CentOS 7.3+/Rocky Linux 8.6+. The updates are free to existing users. The base edition of the software is free; the Studio edition, which adds the AI features, stereoscopic 3D tools, HDR grading and more Resolve FX filters and Fairlight FX audio plugins, costs $295.

https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=185029

Fusion Studio 18.5

Blackmagic Design has released Fusion Studio 18.5, the latest version of its compositing software. It’s a significant update, adding initial support for USD-based workflows, a new Multi-merge tool for layer-based compositing, a new AI-based depth map tool and GPU-accelerated clean plate and anaglyph tools.

One of the key changes in Fusion Studio 18.5 is initial USD support. Visual effects artists working in pipelines based around the Universal Scene Description get the option to import data in USD format, including geometry, materials, lights, cameras and animation.
The implementation supports .usdc, .usda and .usdz files.

Another significant new feature is Multi-merge, a new tool to “connect and manage multiple foreground sources as a composited layer stack”. It makes it possible to composite shots using a layer-based workflow along the lines of that in tools like After Effects, as well as through the node-based workflow traditionally used in Fusion. Users can change the order or visibility of foreground sources via a Layer List palette in the Inspector, and edit layer properties like position, size and apply mode via the layer properties palette below.

Fusion Studio also integrates DaVinci Resolve Studio’s AI-based depth map generator, which automatically generates depth maps from source footage, for use in generating effects like fog and depth of field.

The release notes also list GPU-accelerated Clean Plate and Anaglyph features, although there is no more information than that, and “up to 3x faster renders” when using the splitter tool.

Fusion Studio 18.5 is available for Windows 10+, macOS 12.0+ and CentOS 7.3+/Rocky Linux 8.6+. New licences of Fusion Studio cost $295. The update is free to existing users. The Fusion toolset in the free edition of DaVinci Resolve has a maximum image resolution of 16k x 16k and lacks network rendering and advanced playback capabilities.

https://www.blackmagicdesign.com/support/readme/a1051ef8ed214997abd3985d58b1fb3b

Inkscape 1.3

New features in Inkscape 1.3 include the Shape Builder tool, which streamlines the process of building complex vector shapes by performing Boolean operations on simpler 2D shapes. The update also adds a new Pattern Editor subdialog to streamline the process of creating repeating patterns, and improves on-canvas pattern editing.

Workflow improvements include a new Document Resources dialog, which lists all of the assets currently used in a document, and the option to group fonts into Collections.
Several UI elements have been redesigned, with Layers and Objects getting back its Opacity and blend mode sliders, and Live Path Effects (LPE) now combining LPE selection and editing tools in a single dialog.

Canvas rendering is now multi-threaded, resulting in a “2–4× speedup” while zooming, panning or transforming objects on machines with multiple CPU cores. In addition, there is now an experimental new OpenGL renderer. It isn’t a full GPU renderer, but it reduces CPU usage, particularly when working on HiDPI displays.

There is a long list of smaller changes, including a rewritten PDF import dialog, better editing of page margins and bleed when designing graphics for print, and syntax highlighting in the XML editor.

https://inkscape.org/news/2023/07/23/inkscape-launches-version-13-focus-organizing-work/

Ziva Real-Time 2.0

Unity has released Ziva Real-Time 2.0, its AI-based system for generating high-quality real-time 3D characters, complete with accurate soft tissue deformation.

Originally known as ZivaRT, before original developer Ziva Dynamics was acquired by Unity, Ziva Real-Time has already been used in production on projects like Insomniac Games’ Spider-Man: Miles Morales. The software has now been rebranded Ziva Real-Time, and forms part of the Unity Wētā Tools product range, along with tools developed in-house at VFX studio Weta Digital, which Unity bought in 2021.

Unlike Ziva Face Trainer, its counterpart for facial animation, the training process is done offline, in the Ziva Real-Time Trainer software. Users supply the data on which to train the AI model: an Alembic file showing the 3D character mesh in a range of poses, plus an FBX file showing the joint hierarchy of the character rig in the same poses. Pose shapes can be created by hand in DCC software, or come from scan data of live actors.
The software also accepts range-of-motion animations, animation cycles and mocap clips as training data, along with cloth simulations from tools like Marvelous Designer or Houdini. As well as human and humanoid characters, the software can be used for creatures: the cheetah shown at the top of this story comes from the free sample assets available in the Ziva Dynamics online store.

Once training is complete, Ziva Real-Time Trainer exports a ZRT file: a lightweight data file that can be processed by the second component of Ziva Real-Time, the Ziva Real-Time Player. Available as a plugin for Maya, Unity and Unreal Engine, the Player makes it possible to drive the deformation of the character inside the host application. According to Unity, “even the most complex [ZRT] character and creature assets are under 30MB”, so they can be processed at “sub-millisecond speeds”.

Ziva Real-Time Trainer is compatible with Windows 10+ and CentOS 7.6-7.9 Linux. The Ziva Real-Time Player plugins are compatible with Maya 2019+, Unity 2021.3+ and Unreal Engine 4.26+, all on Windows only.

The software is rental-only, with node-locked licences costing $1,800/year and floating licences costing $1,900/year, both including licences of the Maya Player plugin. Additional licences of the Maya Player plugin cost $100/year for a node-locked licence; $200/year for a floating licence. The Unity and Unreal Engine Player plugins are free.

https://docs.zivadynamics.com/zrt/release_notes.html#release-notes-2-0-0
https://zivadynamics.com/ziva-real-time

Photoshop 25.0

Photoshop 25.0: new Generative Expand tool from Adobe’s Firefly generative AI toolset

Adobe began integrating Firefly into beta builds of Photoshop in May, with Generative Fill. It enables users to select a region of an image to replace, then enter a text description of the content they would like to generate in its place, with Photoshop creating the AI-generated content as a separate layer.
Generative Expand is similar, but it’s an outpainting rather than an inpainting tool, so rather than replacing a region of an image, it extends an existing image’s borders. Users can extend the canvas using the existing Crop tool, then type in a text description of the new background they would like to generate. Again, the AI content is generated non-destructively in a new layer.

You can find more information on Adobe’s approach to generative AI, including how the firm is collecting images on which to train its generative AI models, in our artist FAQs on Firefly.

Photoshop 25.0 is the first time we can remember that Adobe has given a beta build of the software its own version number. The actual stable release, Photoshop 24.7, is a rather smaller update, and mainly fixes user-reported bugs, including Liquify and other filters lagging on macOS.

Photoshop 25.0 is a beta build of the software, and can be installed via Adobe’s Creative Cloud app. Adobe hasn’t announced when the new AI features will be available in a stable release.

Photoshop 24.7 is available for Windows 10+ and macOS 11.0+ on a rental-only basis. In the online documentation, the update is also referred to as the July 2023 release. Photography subscription plans, which include access to Photoshop and Lightroom, start at $119.88/year. Single-app Photoshop subscriptions cost $31.49/month or $239.88/year.

https://helpx.adobe.com/photoshop/using/generative-expand.html
https://helpx.adobe.com/photoshop/using/whats-new.html

Pulldownit 5.5 for 3ds Max and Maya

The release updates Pulldownit’s Shatter It tool, which pre-fractures geometry in a scene before applying dynamics, with users now able to reposition or discard shatter centres by clicking directly in the viewport. The tool is also now multi-threaded, with shattering now “8x faster” than Pulldownit 5.0 in the Maya edition, and “up to 4x faster” in the 3ds Max edition.
Thinkinetic has also updated the Bounded behaviour introduced in Pulldownit 5.0, which controls how far detached fragments can move from the crack that created them. Users can now turn fragments dynamic on a specified frame, making it possible to cause cracks to tear the surface of an object along their entire length simultaneously.

Workflow improvements include the option to create or modify parameters for several cracks at once, and updates to the dynamic solver to generate better results with the default parameters. The Maya edition of Pulldownit also now supports Maya’s Cached Playback system once keyframes have been baked, which should improve viewport playback.

Pulldownit 5.5 is available for 3ds Max 2019+ and Maya 2018+. The software is rental-only. Node-locked licences now cost €280/year (around $310/year). Floating licences cost €350/year ($390/year).

https://www.pulldownit.com//versions-logs.php?idcate=3

Rokoko Headrig

Headrig is described as a “lightweight, comfortable, and adjustable motion capture head mount designed to accommodate an iPhone” – although according to the FAQs, it also accommodates Android phones. At just 240g, it’s certainly lightweight: in our not terribly scientific online survey of conventional mocap helmets, weights ranged from around 600g to over 1kg. However, that includes helmets designed to accommodate GoPro or RealSense cameras, whereas the Headrig is specifically designed for phones.

It can be used with any facial mocap app, including Epic Games’ free Live Link Face or Moves by Maxon, though clearly, Rokoko hopes that people will use it with Rokoko Face Capture, its own iOS app. According to Rokoko, the Headrig fits “any adult head”, adjusting via a fastening wheel and a velcro strap.

The Headrig is available to pre-order now for $195, plus tax and shipping, and is due to ship in September.
It can be used with any standard iPhone – but not iPhone Plus or iPhone Max models – or an equivalently sized Android phone.

https://www.rokoko.com/products/headrig
  11. Somehow the Connect object was responsible. I broke it into two sweeps and the material stays fine, but that broke the dynamic connections in the process. c4d_core_ material_issue.c4d
  12. You could edit the UV map and apply Mirror U/Mirror V to the selected polygons.
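For reference, mirroring in UV space is just a reflection of the coordinates across the middle of the 0–1 UV square. A minimal sketch of what Mirror U / Mirror V do numerically (plain Python for illustration, not the C4D API):

```python
def mirror_u(uvs):
    """Reflect UV coordinates across the vertical midline of the 0-1 UV square.

    uvs: list of (u, v) tuples for the selected polygons' vertices.
    Returns a new list with u replaced by 1 - u; v is unchanged.
    """
    return [(1.0 - u, v) for (u, v) in uvs]

def mirror_v(uvs):
    """Reflect UV coordinates across the horizontal midline (v -> 1 - v)."""
    return [(u, 1.0 - v) for (u, v) in uvs]

# A coordinate near the left edge of the UV square moves to the right edge.
print(mirror_u([(0.25, 0.5)]))  # [(0.75, 0.5)]
```

Applying the same mirror twice restores the original layout, which is why the operation is safe to experiment with on a selection.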
  13. I don't have Octane to test. Does applying a Pin Material Tag on the Sweep help at all ?
  14. Is this any help? The Push Apart is optional. You can test between the old Bullet system and the new Soft Body system (the old system is faster). Simulation is faster with the 2nd cell layer deactivated. cells.c4d
  15. What kind of dissolve ? The fading is a type of disintegration by itself. Ericns has a good point on limiting the maximum length of the Tracer
  16. Make a huge sphere and distribute nulls randomly on its surface. Put the Cloner under the Tracer. Make a Sketch&Toon material and assign it to the Tracer. Go to Render Settings -> S&T -> Lines and check Splines. Render -> Mode -> Include and put the Tracer in there. Disable Background Blend. Animate the sphere (a simple rotation) to reveal star trails. Go to the material, set the colour to white and enable Opacity with Along Stroke. Render. If the opacity does not follow the movement, disable Invert in Along Stroke. star trails.c4d
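The "distribute nulls randomly on its surface" step is easy to get wrong: sampling random spherical angles directly clusters points at the poles, which would bunch the star trails. A minimal sketch of one standard uniform method, normalising Gaussian samples (plain Python, outside C4D; the radius value is arbitrary):

```python
import math
import random

def random_point_on_sphere(radius=1.0):
    """Return a point distributed uniformly on a sphere's surface.

    Sampling three Gaussians and normalising the vector avoids the
    pole-clustering you get from naive uniform (theta, phi) angles,
    because the 3D Gaussian is rotationally symmetric.
    """
    while True:
        x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
        length = math.sqrt(x * x + y * y + z * z)
        if length > 1e-12:  # reject the vanishingly rare near-zero vector
            return (radius * x / length, radius * y / length, radius * z / length)

# Positions for 1000 "star" nulls on a huge sphere.
stars = [random_point_on_sphere(10000.0) for _ in range(1000)]
```

Each generated triple can then be used as a null's position before handing the setup over to the Cloner/Tracer chain.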
  17. Any thoughts about Natron ? I've downloaded it but used it only once, for flipping a video from portrait to landscape. I was very impressed by the amount of effects it provides but didn't bother with any of them as I did not have time to. https://natrongithub.github.io/
  18. I was looking at eGPUs and nothing was said about whether the internal and external GPUs can cooperate on the same task. Has anyone used them both to render ? Can two or more eGPUs work on the same machine, or will there be a bottleneck on the cable? Does the system provide the option to use either of them as the preferred GPU to launch an app, or does the eGPU take over everything? My laptop has a USB-C port, does that mean it supports a Thunderbolt 3 cable with a USB-C end ?
  19. 51 artists took part in the competition. Twenty-five members of the jury selected twenty-three renders for the final stage (each place in a juror's top five awards a set number of points: first place – five points, fifth place – one point; all points/votes are summed up). Special thanks to the sponsors who support our event and help 3D artists to create their masterpieces. Each of them is the best in their field. You can always count on their quality service, and so we are happy to recommend their products. Would be great to see your comments, and thank you all for an amazing challenge. See all: https://hum3d.com/ua/blog/sci-fi-industrial-zone-winners/?fbclid=IwAR3aJB0fIPqNoi6FnjvPMWFpMgeA29TD-0pzYJ98XNexF_lvTZzAWmAH3mk
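The scoring scheme described above (five points for a juror's first place down to one point for fifth, summed across all jurors) can be sketched as follows; the ballots here are made-up illustrative data, not the actual competition votes:

```python
from collections import Counter

# Points awarded per place on a juror's top-five ballot:
# 1st place = 5 points ... 5th place = 1 point.
PLACE_POINTS = [5, 4, 3, 2, 1]

def tally(ballots):
    """Sum points across all jurors' top-five ballots.

    ballots: list of lists, each an ordered top five of entry names.
    Returns a Counter mapping entry -> total points.
    """
    totals = Counter()
    for ballot in ballots:
        for place, entry in enumerate(ballot[:5]):
            totals[entry] += PLACE_POINTS[place]
    return totals

# Hypothetical three-juror example:
ballots = [
    ["A", "B", "C", "D", "E"],
    ["B", "A", "C", "E", "D"],
    ["A", "C", "B", "D", "E"],
]
print(tally(ballots).most_common(2))  # A leads with 5 + 4 + 5 = 14 points
```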
  20. You've mentioned the refraction of the glass and the wine but not the refraction of the bubbles themselves ! If a medium has the same refractive index as its surroundings then it won't be visible. The fact that you are able to see the bubbles without the liquid should have made you suspicious: how can you see air bubbles surrounded by air ? The refractive indices must be as follows: glass > liquid > air = bubbles = 1
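The "same IOR means invisible" point can be made quantitative with the Fresnel reflectance at normal incidence, R = ((n1 − n2) / (n1 + n2))²: when the indices match, R = 0 and light also passes through unbent, so the interface disappears. A minimal sketch (the liquid index 1.34 is an assumed water-like value, not measured for wine):

```python
def normal_incidence_reflectance(n1, n2):
    """Fresnel reflectance at 0 degrees for an interface between media
    with refractive indices n1 and n2: R = ((n1 - n2) / (n1 + n2)) ** 2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Air bubble (n = 1.0) inside a water-like liquid (n ~ 1.34): about 2%
# of light reflects at normal incidence, enough to make it visible.
print(round(normal_incidence_reflectance(1.34, 1.0), 4))  # 0.0211

# A "bubble" whose IOR equals the surrounding liquid: R = 0, invisible.
print(normal_incidence_reflectance(1.34, 1.34))  # 0.0
```

This is why setting the bubbles' IOR to 1 (matching air) while the liquid keeps a higher index is what makes them show up in the render.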