
Leaderboard

Popular Content

Showing content with the highest reputation on 09/04/2023 in all areas

  1. Hello, I am implementing a 2D Delaunator library (this one: https://github.com/mapbox/delaunator) into a plugin. It works well and allows mixing multiple 2D meshes, even dirty ones, into a single clean topology. But the main goal I would like to reach is to keep the UVs! It's a bit of a nightmare, because there is a lot of extrapolation and compromise involved, but here are my first results. There is still work to do on the material assignments and UV mixing. For the moment I have one precise use case for this tool (creating crossroads). Do you think it's something that could interest people, maybe for other uses? Or with other features?
    1 point
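The UV-preservation problem described above boils down to re-sampling each vertex's UV from the triangle that contains it in the source mesh. Below is a minimal sketch of that barycentric lookup, using SciPy's Delaunay triangulation as a stand-in for the JavaScript mapbox/delaunator library; the points, UVs and helper function are hypothetical illustrations, not the plugin's actual code:

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical source data: four 2D vertices with known UVs
# (here the UVs simply equal the positions, for clarity).
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
uvs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# SciPy's Delaunay plays the role of the JS Delaunator here.
tri = Delaunay(points)

def interpolate_uv(p):
    """Return the UV at 2D point p, blended from the corners of the
    containing triangle via barycentric coordinates."""
    p = np.asarray(p, dtype=float)
    simplex = int(tri.find_simplex(p.reshape(1, -1))[0])
    if simplex < 0:
        return None  # outside the triangulation: needs extrapolation
    # SciPy stores an affine map per triangle that yields two
    # barycentric coordinates; the third is 1 minus their sum.
    T = tri.transform[simplex]
    b = T[:2].dot(p - T[2])
    bary = np.append(b, 1.0 - b.sum())
    return bary.dot(uvs[tri.simplices[simplex]])

print(interpolate_uv([0.25, 0.25]))  # a blended UV inside the square
```

Points falling outside every source triangle are where the "extrapolation and compromises" mentioned in the post come in; a common fallback is clamping to the nearest triangle edge.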
  2. This guy has started doing tutorials on Scene Nodes. Leaving it here, since not many at Maxon are spending time making YouTube tutorials on their own Scene Nodes. 😛
    1 point
  3. C4D 2023 was released Sep 7, 2022. When do you think release 2024 will drop?
    1 point
  4. Nuke 15.0

Nuke 15.0 features a number of updates to key pipeline technologies, including support for the current CY2023 VFX Reference Platform specification, and for USD 23.05. The release also introduces experimental support for OpenAssetIO, the Foundry-developed open standard for exchange of data between DCC and asset-management software, which was adopted by the Academy Software Foundation last year. The documentation for the beta describes the initial implementation as a “very basic tech preview” introduced to let studios begin testing OpenAssetIO in their pipelines.

In addition, Nuke 15.0 introduces native support for the Apple Silicon processors in current Macs. The change will make Nuke the latest key application in VFX pipelines to support M1 and M2 processors natively, Autodesk now having introduced support in Maya and Arnold, and SideFX having introduced support in Houdini.

The 15.0 release also extends the USD-based 3D compositing system introduced in Nuke 14.0 last year. UI and workflow updates include a dedicated 3D toolbar in the viewer, and two-tier selections – for example, to select faces within an object – for more precise control. There are also accompanying updates to the Scanline Render system and to the GeoMerge node, used to merge stacks of objects into a single scene graph. The latter gets five new merge modes, intended to give users more control over how data is merged when using the new 3D system.

The underlying USD implementation now supports the USD Python bindings, making it possible to manipulate USD data directly through Python; and gets structural changes intended to improve performance, to make it easier to inspect and filter complex scenes, and to provide greater user control in future export workflows.

There are also further updates to AIR, Nuke’s machine learning framework, intended to enable users to train their own neural networks to automate repetitive tasks like roto.
Training times when using the CopyCat node have been reduced by “up to 50%”, with key changes including the option to distribute training across multiple machines. Nuke Studio users also get support for AIR’s Inference node as a soft effect in the editorial timeline. The update also makes more complex Blink kernels available as soft effects, including Denoise, LensDistortion and blur effects. Support for the NDI network protocol has also been extended, with support for audio when using NDI in Monitor Out.

Nuke 15.0 is currently in beta. It is compatible with Windows 10+, Rocky Linux 9 and macOS 12.0+, and supports Apple Silicon processors natively. Nuke 14.1 is also currently in beta, and is compatible with Windows 10+, CentOS 7.4-7.6 Linux and macOS 12.0+. It supports Apple Silicon using Rosetta emulation. To access either beta, you will need a Nuke, NukeX or Nuke Studio license with current maintenance. The beta period is expected to last until the end of the year.

https://www.cgchannel.com/2022/12/foundry-releases-nuke-14-0/
https://www.foundry.com/products/nuke/beta/15.0_14.1

Radical Live

The first major change to Radical’s services since Autodesk invested in the firm last year, Radical Live makes it possible to extract motion from video in real time as well as offline. Users can shoot video footage of an actor – the service works with a “consumer-grade camera”, and supports simultaneous capture of multiple actors – and stream it to the cloud, where it is processed on Radical’s servers. The resulting motion data can be previewed on standard desktop or mobile systems, or streamed into Blender, Maya, Unity or Unreal Engine in JSON format. Users can record live data to the software’s timeline, with the option to retarget the motion to a custom 3D character in Maya or Unreal Engine. Retargeting is “coming soon” to the Blender and Unity plugins.
Radical has also simplified its pricing, with both Live and the original Core service available through a single $96/year Professional subscription, which provides 36 hours of play time per year, usable across either service. Free subscriptions provide 24 hours of play time per year, but you can only preview the motion: you don’t get FBX downloads or the DCC integration plugins for Radical Live.

Radical Live can be used on any device that “can stream video and run a web browser” – Radical specifies the Chrome browser – including Windows, Linux and macOS systems, and iOS and Android mobiles. Integration plugins are available for Blender 2.83+, Maya 2022, Unity 2019-2022 and Unreal Engine 5.1+, with a plugin for Omniverse “coming soon”.

https://radicalmotion.com/products/live
https://www.youtube.com/@radicalmotion2207

Luxion buys Digizuite

Luxion has completed its acquisition of digital asset management provider Digizuite. The financial terms of the deal have not been disclosed. Luxion will now “[integrate] cutting-edge DAM capabilities into its portfolio”, which includes the KeyShot renderer, widely used for product design and visualization. Luxion and Digizuite will now integrate operations to bring users “enhanced products and services … to solve our customers’ biggest content management challenges”. The announcement doesn’t go into detail on how, or whether, Digizuite’s technology will be integrated with KeyShot, but in July, when the buyout was first announced, Thorsgaard said that the firm aimed to provide “a full-service design solution, from ideation to asset management”. He commented that designers are “juggling dozens of tools” and facing “increasing production demands”, and that Luxion aimed to enable them to “focus more of their energy into their creative work”. At the time of the original announcement, Digizuite said that the acquisition would not result in any changes to customers’ existing “relationships or agreements”.
KeyShot is available for Windows 10+ and macOS 11.7+. Integration plugins are available for a range of DCC and CAD tools, including 3ds Max, Blender, Cinema 4D and Maya. The software is available rental-only. KeyShot Pro subscriptions cost $1,188/year.

https://blog.keyshot.com/luxion-announces-offer-to-purchase-digizuite

Cinebench 2024

Maxon, developers of professional software solutions for editors, filmmakers, motion designers, visual effects artists and creators of all types, is thrilled to announce the highly anticipated release of Cinebench 2024. This latest iteration of the industry-standard benchmarking software, which has been a cornerstone in computer performance evaluation for two decades, sets a new standard for performance evaluation, embracing cutting-edge technology to provide artists, designers, and creators with a more accurate and relevant representation of their hardware capabilities.

Cinebench 2024 ushers in a new era by embracing the power of Redshift, Cinema 4D's default rendering engine. Unlike its predecessors, which utilized Cinema 4D's standard renderer, Cinebench 2024 utilizes the same render algorithms across both CPU and GPU implementations. This leap to the Redshift engine ensures that performance testing aligns seamlessly with the demands of modern creative workflows, delivering accurate and consistent results.

Cinebench 2024 reinstates GPU benchmarking, a feature absent from Cinebench for the past decade. The latest version not only evaluates CPU performance; it also provides insights into the GPU's capabilities, reflecting the evolving technological landscape of creative software and workflows. Cinebench 2024 is designed to accommodate a broader range of hardware configurations.
It seamlessly supports x86/64 architecture (Intel/AMD) on Windows and macOS, as well as Arm64 architecture to extend its reach to Apple silicon on macOS and Snapdragon® compute silicon on Windows, ensuring compatibility with the latest advancements in hardware technology. Redshift GPU performance can be evaluated on systems with compatible Nvidia, AMD and Apple graphics processors. Cinebench 2024 streamlines the benchmarking process by utilizing a consistent scene file for both CPU and GPU testing. This innovation enables users to discern the advantages of leveraging Redshift GPU, providing a real-world glimpse into the benefits of harnessing cutting-edge graphics hardware for rendering tasks.

Cinebench 2024 introduces a revamped user interface that enhances the user experience and showcases the incredible artistic endeavors achieved with the Redshift render engine in Cinema 4D. This dynamic interface serves as a testament to the potential of the Redshift engine while offering users a more intuitive and visually engaging experience.

Beyond the surface, Cinebench 2024 brings forth a host of performance-enhancing features. With a threefold increase in memory footprint compared to Cinebench R23, the software caters to the memory-intensive demands of modern projects. Moreover, a six-fold rise in computational effort and utilization of newer instruction sets ensures a benchmark that resonates with the complexity and sophistication of contemporary creative projects.

It's crucial to note that Cinebench 2024 scores cannot be directly compared to those of its predecessor, Cinebench R23. With the incorporation of Redshift, a different rendering engine, larger memory footprint, and more complex scenes, Cinebench 2024 offers a distinctly enhanced and accurate evaluation of modern hardware capabilities. Cinebench 2024 is immediately available for download starting today on the official Maxon website.
Creatives, gamers and technology enthusiasts are invited to explore this groundbreaking benchmarking tool and experience firsthand the leap in performance evaluation that it offers.

https://www.maxon.net/tech-info-cinebench
https://www.maxon.net/en/downloads/cinebench-2024-downloads

Unreal Engine 5.3

One of the biggest changes in Unreal Engine 5.3 is the option to import simulations from DCC applications like Houdini in OpenVDB format and render them in Unreal Engine. Unlike third-party plugins for importing OpenVDB files, which convert the imported data to NanoVDB, Unreal Engine converts the data into Sparse Volume Textures (SVTs). Like UE4’s Volume Textures, SVTs can be indexed with 3D UV co-ordinates, but use “much less” memory. SVTs are supported in the Deferred Renderer as Heterogeneous Volumes for smoke and fire, or as Volumetric Cloud or Volumetric Fog. “More complete support” is available when rendering volumes with the Path Tracer, which can simulate scattering, self-shadowing and GI.

Another major new feature in Unreal Engine 5.3 is the Skeletal Editor, intended for rigging characters or editing skin weights directly inside Unreal Editor, rather than having to do so in DCC applications. Users can convert Static Meshes to Skeletal Meshes, add or edit bones, and paint skin weights, with support for standard painting options like Flood, Relax and Normalize. Animators also get updates to the Curve Editor and Smart Bake system, and workflow improvements for animation retargeting.

Unreal Engine 5.3 also includes a new Panel Cloth Editor, with support for XPBD (Extended Position Based Dynamics) constraints as well as the existing PBD constraints. Simulation metrics are no longer baked into the draped pose, so simulation quality should be improved even when using PBD constraints. The new editor uses a non-destructive workflow, replacing masks with reusable weight maps, and supports cloth-flesh interactions and Level Set Volume (LSV) collisions.
According to Epic, it lays the foundation for both real-time cloth editing, and a “more VFX-oriented” approach based around simulation caching, enabling users to trade simulation time against accuracy.

Architects and visualization artists get proper support for orthographic rendering. Unreal Engine has had an Orthographic mode for cameras for some time, but Epic describes it as having been “impractical to use” due to the number of rendering features not supported. In Unreal Engine 5.3, it supports “most modern features”, including dynamic GI system Lumen, Nanite virtualised geometry, shadows, and Temporal Super Resolution (TSR) render upscaling.

Artists using Unreal Engine for previs or virtual production work get a number of new features, including a new Anamorphic Lens Model and Anamorphic Lens Solver. The changes should make it possible to render the CG elements of a shot in real time with distortion matching that of live-action footage shot using anamorphic lenses. Other key changes include initial support for Unreal Engine’s virtual camera system on macOS, and CineCamera Rig Rail, for setting up rail camera shots inside Unreal Engine.

Unreal Engine 5.3 also features further updates to virtualised geometry system Nanite, dynamic lighting system Lumen, the in-engine modelling and UV tools, and to the other simulation toolsets. Nanite gets support for explicit per-vertex tangents, which should improve results with low-poly models, and can now be enabled in Landscape Actors to boost runtime performance for large landscape assets. Lumen now supports more than one bounce when rendering reflections using hardware ray tracing, and Lumen Reflections can be used without Lumen GI, to improve the visual quality of levels with static lighting. The in-engine modelling toolset gets a reworked UI and new vertex paint tools, while the UV editor can now visualise texture distortion via heatmaps.
Niagara, Unreal Engine’s particle effects system, gets support for transforming fluid simulations after they are cached, making it possible to duplicate and offset caches to create more complex effects. The hair grooming toolset now supports streaming of groom and groom binding assets, and there is initial support for hair strands in the deformer graph. The new Procedural Content Generation toolset introduced in Unreal Engine 5.2 gets support for hierarchical workflows, making it possible to use multiple grid sizes when generating game worlds.

In addition, support for the USD and MaterialX standards is being extended further, with the option to edit USD materials directly in the viewport and to import USD files with MaterialX shading networks. Other key pipeline integration changes include the use of the OpenColorIO (OCIO) standard for internal texture conversions. You can find a complete list of changes via the links below, including new features for developers like Multi-Process Cook, which makes better use of available CPU cores and memory when using a build farm to convert content to a platform-specific format.

Unreal Engine 5.3 is available now for 64-bit Windows, macOS and Linux. Use of the editor is free, as is rendering non-interactive content. For games developed with the engine, Epic takes a 5% royalty on gross revenue after the first $1 million generated.

https://www.unrealengine.com/en-US/blog/unreal-engine-5-3-is-now-available
https://docs.unrealengine.com/5.3/en-US/unreal-engine-5.3-release-notes/

Substance 3D Modeler 1.4

The biggest new feature in Substance 3D Modeler 1.4 is the reference image system. It enables users to import images while working in desktop mode, then view them in the background while sculpting in virtual reality – so far, the images are only visible in VR. The images are stored in save files, along with their positions. In addition, there are two interesting new experimental features: Headless VR mode and ambient occlusion.
The first is a new sculpting mode that makes it possible to use VR controllers while working on desktop. Adobe describes it as combining “the best of desktop and VR”, but also notes that “your mileage may vary” according to the hardware you’re using. The second is ambient occlusion, to give a greater sense of depth to models. It’s currently only supported for the virtual clay, but it can be enabled in Model mode, making it possible to use while sculpting, unlike the new viewport ray tracing system added in the previous release. Both are disabled by default, but can be toggled on from the Preferences panel, along with a Live boolean preview option, which calculates Boolean operations in real time.

The software’s interface has also been overhauled, rationalising the top and bottom bars. The biggest change is the new Environment panel accessible from the top bar, which can be used to adjust the position, intensity and color of Modeler’s three-point lighting. It’s also possible to toggle the key, fill and rim lights individually from the panel, or choose whether they cast shadows. The Warp tool gets five new falloff curve settings. The image above shows the range of forms that can be sculpted when using them with sphere, cube and capsule brushes. In addition, multitouch is supported on Wacom graphics tablets and pen displays, making it possible to change the camera view using gestural controls. You can find a full list of changes via the link at the foot of the story.

Substance 3D Modeler 1.4 is compatible with Windows 10+. It is available via Substance 3D Collection subscriptions, which cost $49.99/month or $549.88/year. Perpetual licences are available via Steam, and cost $149.99.
https://helpx.adobe.com/substance-3d-modeler/release-notes/v1-4-release-notes.html
https://helpx.adobe.com/content/dam/substance-3d-modeler/release-1-4/ReferenceImages.mp4

Substance 3D Sampler 4.2

Key changes in Substance 3D Sampler 4.2 include a new version of Image to Material, its AI-trained system for generating PBR texture maps from a single source image. You can read more about the toolset, which can generate base color, roughness, metallic, normal and displacement maps from photos, in this story, from back when the software was still called Substance Alchemist. The new version has been “trained on all material types”, with users now able to choose specific algorithms matching a range of common materials. Those shown in the video above include asphalt, ceramics, leather, metal, paint, paper, plastic and rubber, and stone, plus general categories for organic and ground materials. According to Adobe, the new algorithms give better results with many materials, with the release notes specifically namechecking fabric, plaster and wood.

The update also adds an entirely new AI-trained feature, the Upscale layer. As its name suggests, it upscales textures – it works with the base color, normal, height, roughness, and metallic channels of a material – with the AI filling in missing details. It’s possible to increase the resolution of a texture by 2x or 4x: there doesn’t currently seem to be an option to achieve a specific target resolution. The update also adds a new preference to enable or disable GPU-accelerated neural networks, presumably to control performance when working with the new AI features.

Other changes in Substance 3D Sampler 4.2 include a new layer resolution system, with layers taking the resolution of the document itself, or from the layer below. The UI displays the dimensions of each layer in pixels, to help visualize the impact of edits on the resolution of a material.
In addition, the Crop filter now supports dynamic output resolution, and the Delighter filter – which removes baked-in shadows and lighting from source photos – has been “vastly improved”, although the release notes don’t specify what has changed.

Substance 3D Sampler 4.2 is available for Windows 10+, CentOS 7.0+/Ubuntu 20.04+ Linux and macOS 11.0+. New perpetual licences, available via Steam, cost $149.99. The Windows and macOS editions are also available via Adobe’s Substance 3D Texturing subscriptions, for $19.99/month or $219.88/year; or Substance 3D Collection subscriptions, for $49.99/month or $549.88/year.

https://helpx.adobe.com/substance-3d-sampler/release-notes/version-4-2.html

Vantage 2.1

Vantage 2.1 adds support for DLSS 3.5, the latest version of Deep Learning Super Sampling, NVIDIA’s AI-based image reconstruction technology. DLSS began as an image upscaling system, enabling applications in which it was integrated to increase viewport interactivity by rendering each frame at a lower resolution, then upscaling the image to viewport dimensions, but now also supports frame generation, further increasing frame rates by generating intermediate frames. To that, DLSS 3.5 adds Ray Reconstruction, a new AI-trained render denoising system intended to supersede traditional denoisers like NVIDIA’s own OptiX AI for “intensive ray-traced scenes”. Although DLSS 3.5 is due to be integrated in architectural renderer D5 Render and NVIDIA’s own Omniverse, Chaos describes Vantage as the “first non-gaming application” to support it.

Ray Reconstruction makes panning the camera “smoother and faster” – in the video at the top of the story, the viewport render becomes much more stable when navigating the scene. However, reflections become blurrier, so users can switch to one of the other denoisers supported in Vantage, including OptiX AI, to improve image quality when the camera is static.
DLSS Ray Reconstruction is available on compatible NVIDIA RTX GPUs: those from 2018’s GeForce RTX 20 series and newer. Other new features in Vantage 2.1 include support for refraction glossiness, improving the accuracy with which materials like frosted glass can be recreated. The release also introduces experimental support for Intel Arc GPUs. The change makes Vantage compatible with discrete GPUs from all of the major manufacturers, Vantage 2.0 having added support for AMD cards.

Chaos Vantage is compatible with Windows 10+ and DXR-compatible AMD, Intel or Nvidia GPUs. The software is rental-only, with subscriptions costing $108.90/month or $658.80/year. It is included free with V-Ray Premium and Enterprise subscriptions.

https://docs.chaos.com/display/LAV/Chaos+Vantage%2C+v2.1.0

Affinity Photo 2.2

Serif has released Affinity Photo 2.2, the new version of its image editing and digital painting app. The update, which is free to existing users, adds support for color-management standard OCIO 2, and makes a number of workflow improvements. The update was released alongside version 2.2 of Serif’s other Affinity products, vector design software Affinity Designer 2.2 and page layout tool Affinity Publisher 2.2.

Key changes in Affinity Photo 2.2 include support for OCIO 2 (OpenColorIO 2), the latest major version of the color-management standard used in movie VFX and animation pipelines. The update also introduces extra keyboard shortcuts for pixel brushes and long-press shortcuts for tools more generally. Other workflow improvements include the option to change the color of guides and new dialogs for more precise control when using the Move or shape creation tools. The iPad edition gets preferences previously only available in the desktop edition.

Affinity Photo 2.2 is available for Windows 10+, macOS 10.15+ and iPadOS 15.0+. The update is free to registered users of version 2.x. New perpetual licences of the desktop edition cost $69.99; the iPad edition costs $18.49.
A Universal License, which includes all of the Affinity apps, costs $164.99.

https://forum.affinity.serif.com/index.php?/topic/191643-faq-22-release-notes-improvements-and-major-fixes/

Substance 3D Connector

The new Connector system will extend the existing Send To functionality in the Substance 3D applications, which makes it possible to transfer data between them directly, without having to export then re-import a file. The demo from the livestream shows both a material and a 3D model being transferred directly from 3D capture and material-authoring tool Substance 3D Sampler to Blender, with the material properties remaining editable inside the open-source 3D software. Only transfer from Sampler to Blender was shown, but Adobe says that it aims to make the system a two-way live link between the Substance 3D apps and third-party tools.

Adobe says that it is currently doing internal tests with Blender, 3ds Max and Maya, but that it aims to support more DCC applications with the Connector in future. The slide above from the livestream shows Cinema 4D, Houdini, Katana and Omniverse. As well as Substance 3D Sampler, the slide shows the logos of 3D texture painting app Substance 3D Painter and virtual reality modeling tool Substance 3D Modeler. Adobe aims to make the underlying library for the Connector and sample plugins available open-source “early next year”. It hasn’t announced an exact release date yet. You can find prices and system requirements for Substance 3D applications in our stories on the current releases of Substance 3D Designer, Modeler, Painter, Sampler and Stager.

https://www.adobe.com/creativecloud/3d-ar.html

After Effects Beta 2023

Adobe has been overhauling the 3D compositing workflow in After Effects for some time now, beginning by introducing new 3D gizmos and camera controls in 2020. Last year, the firm released a beta build that made it possible to import 3D models in OBJ, glTF and GLB format directly into the software.
Although that functionality remains in beta, the beta has now been expanded. The key feature of the new “true 3D workspace” is the Advanced 3D renderer, a new hardware-agnostic GPU-based render engine. Also referred to in the online documentation as the Mercury 3D renderer, it is a final-quality renderer, unlike the existing 3D Draft preview mode. As well as 3D models, it can render other 3D layers such as extruded text, and 2.5D plane layers; and supports PBR materials via the Adobe Standard Material. According to Adobe, it should let motion graphics and visualization artists render less demanding shots directly inside After Effects without having to round-trip shots to DCC applications like Cinema 4D, a version of which is included with After Effects.

Also supports image-based lighting and stylized effects

The Advanced 3D renderer brings with it other new features of the true 3D workspace: image-based lighting for photorealistic work, and 2D/3D workflows for stylized work. The former makes it possible to light objects in the workspace with a HDRI environment map, imported in .hdr format. The latter makes it possible to use a rendered frame from a 3D model layer as a source, in order to apply 2D effects to portions of 3D scenes. According to Adobe, users can create “highly stylized renders using effects that reference another layer, such as Displacement Map, Vector Blur [and] Calculations”. However, the renderer is a work in progress, and currently has several known limitations, including lack of support for DoF and motion blur, and issues with shadows and lighting.

The new beta also features an improved version of the Roto Brush, After Effects’ rotoscoping tool. It enables users to create roto masks by roughly painting over the part of the frame to be isolated, with the software automatically generating the full mask and propagating it through the other frames of the video being rotoscoped.
On paper, the new version isn’t as major an update as 2020’s Roto Brush 2.0, which applied machine learning to generate masks faster and more accurately. However, Adobe still describes it as a “major advance”, improving speed, quality of the masks generated, and stability, particularly in shots where the object being masked is occluded by foreground objects. The true 3D workspace and improved Roto Brush are available via a separate beta build of After Effects. Adobe hasn’t announced a final release date for them.

The current stable release, After Effects 23.6, is available for Windows 10+ and macOS 11.0+ on a rental-only basis. Subscriptions to After Effects cost $31.49/month or $239.88/year, while All Apps subscriptions, which provide access to over 20 of Adobe’s creative tools, cost $82.49/month or $599.88/year.

https://blog.adobe.com/en/publish/2023/09/13/new-ai-3d-features-at-ibc-2023

Photoshop 25.0

Originally released in beta in July, Photoshop 25.0 introduces a second new tool based on Firefly, Adobe’s generative AI toolset. Adobe began integrating Firefly into Photoshop in May, with Generative Fill. It enables users to select a region of an image to replace, then enter a text description of the content they would like to generate in its place, with Photoshop creating the AI-generated content as a separate layer. Generative Expand is similar, but it’s an outpainting rather than an inpainting tool, so rather than replacing a region of an image, it extends an existing image’s borders. Users can extend the canvas using the existing Crop tool, then type in a text description of the new background they would like to generate. Again, the AI content is generated non-destructively in a new layer. Both Generative Expand and Generative Fill are now supported with actions in Photoshop’s contextual task bar to streamline workflow.
Although the tools were only licensed for non-commercial use during the beta period, both can now be used on commercial projects, Adobe having just introduced credit-based pricing for Firefly. Creative Cloud subscriptions include a monthly quota of ‘fast’ generative AI credits, with one credit corresponding to a Generative Fill or Generative Expand operation on a 2,000 x 2,000px image. Photography plans get 100-250 credits/month, single-app subscriptions to Photoshop get 500 credits/month, and All Apps subscriptions get 1,000 credits/month. Once they are used up, users can continue to use Firefly for free, but at slower speeds. From November 2023, it will also be possible to buy extra fast credits through a separate new subscription plan, with prices starting at $4.99/month for 100 credits.

Other changes in Photoshop 25.0 include an update to the Remove tool. Introduced in Photoshop 24.5, the tool, which is also AI-trained, makes it possible to do quick object removals by roughly painting over the object to be removed, with Photoshop automatically generating a new section of the background to fill the gap. In Photoshop 25.0, as well as painting over an object, you can draw a ring around it. According to Adobe, the new workflow makes it easier to replace large areas of an image, since it reduces the chances of missing out parts of the area to be replaced.

In addition, Adobe has discontinued Photoshop’s Preset Sync feature, which made it possible to sync presets for Brushes, Gradients, Swatches, Styles, Shapes, and Patterns with a Creative Cloud account. That meant that presets could be transferred to a new device – for example, from a laptop to a desktop machine – when logging in with the same Adobe ID. The change follows Adobe’s announcement that it is discontinuing Creative Cloud Synced Files across all Creative Cloud subscriptions, beginning on 1 February 2024.
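The Firefly credit quotas described above make top-up costs easy to estimate. A quick sketch of the arithmetic; the helper function, its name and the rounding behaviour are illustrative assumptions, not part of Adobe's announcement:

```python
import math

def extra_credit_cost(credits_needed, pack_size=100, pack_price=4.99):
    """Cost of extra 'fast' generative credits at the announced
    $4.99-per-100-credits starting price (from November 2023)."""
    packs = math.ceil(credits_needed / pack_size)  # whole packs only
    return round(packs * pack_price, 2)

# e.g. a base Photography plan (100 credits/month) needing 250 more:
print(extra_credit_cost(250))  # 14.97
```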
Adobe has also updated Photoshop on the web, the browser-based edition of the software, available as part of Creative Cloud Photoshop subscriptions. Like the desktop edition, it includes the new Generative Fill and Generative Expand tools. Other changes in the September 2023 release include a more streamlined user interface, one-click quick actions for common tasks like background removal, and the option to invite other people to edit cloud documents.

Finally, Adobe has released Photoshop on the iPad 5.0, the latest version of the tablet edition of the software, also available to Photoshop subscribers. The update makes it possible to switch between variants of an image created using Generative Fill in the desktop edition of Photoshop via the Variations panel. Other changes include the option to import and open photos from Lightroom, Adobe’s photo editing software, and updates to the color picker and color swatches.

Photoshop 25.0 is available for Windows 10+ and macOS 11.0+. In the online documentation, it is also referred to as the September 2023 release or Photoshop 2024. The software is rental-only, with Photography subscription plans, which include access to Photoshop and Lightroom, starting at $119.88/year. Single-app Photoshop subscriptions cost $31.49/month or $239.88/year.

https://helpx.adobe.com/photoshop/using/whats-new/2024.html
https://helpx.adobe.com/photoshop/using/generative-ai-faqs-photoshop.html
https://helpx.adobe.com/photoshop/using/whats-new/mobile-2024.html

Twinmotion 2023.2

Created by visualisation studio KA-RA, Twinmotion was designed to help architects with limited 3D experience to create still or animated visualisations of buildings. It imports hero models in a range of standard 3D file formats, or via live links to CAD applications. Users can then create background environments from a library of stock assets, and assign lights.
Atmospheric properties – including clouds, rain and snow, and ambient lighting based on geographical location and time of day – can be adjusted via slider-based controls. The software was acquired by Epic in 2019, and initially made available for free, before being re-released commercially in 2020 with an aggressive new price point.

Twinmotion 2023.2 updates the software to the same foundation as Unreal Engine 5.3, the latest version of the game engine and real-time renderer. The updated engine has a number of new features, the major one being support for Lumen, Unreal Engine’s real-time global illumination (GI) system.

The software now has two real-time rendering modes: Standard, which uses a volumetric approach to ambient lighting, and Lumen, which is a surface-based approach, using ray tracing to calculate one light bounce and one reflection bounce. Lumen generates better-quality renders, but uses more CPU/GPU resources and RAM. It provides a middle option between the old real-time rendering and the Path Tracer, which provides even greater render quality, but uses even more resources. In the initial release, Lumen is not available in VR mode, or on the cycloramas or LED walls used in virtual production, although the latter should be supported “soon”.

Twinmotion 2023.2 also moves the software to real-world light values for sun intensity. As a result, the software’s auto-exposure algorithm has been adjusted to work with a larger range of exposure values, with the total range increased from 4 stops to 26 stops. In addition, new Local Exposure settings have been added to help preserve shadow and highlight details in scenes with high dynamic ranges.

Other changes include a new Basic glass material, intended as a faster-rendering alternative to the more fully featured Standard and Colored glass materials for scenes that contain a lot of clear glass.
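To put the exposure change in perspective: each photographic stop doubles the amount of light, so an n-stop range corresponds to a 2^n contrast ratio. A quick sketch of the arithmetic (the function name is my own):

```python
def contrast_ratio(stops: int) -> int:
    """A stop is a doubling of light, so n stops span a 2**n contrast ratio."""
    return 2 ** stops

# The change described above: auto-exposure range grows from 4 to 26 stops.
old_range = contrast_ratio(4)    # 16:1
new_range = contrast_ratio(26)   # 67,108,864:1
print(old_range, new_range)
```

In other words, the new range covers roughly four million times the luminance span of the old one, which is why the auto-exposure algorithm needed reworking.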
In addition, assets from Sketchfab, the Epic-Games-owned online model library, can now be imported with the model hierarchy intact.

Epic Games has also replaced the old non-commercial free trial of Twinmotion, which capped exports at 2K resolution, but which was otherwise fully featured, with a new Community Edition. It’s still free, but as well as limiting resolution, it watermarks output, and lacks access to online collaboration system Twinmotion Cloud. Twinmotion 2023.2 Preview 1 is only available in the Community Edition, which Epic Games says will now be the policy for all future preview releases.

Twinmotion 2023.2 is available as a public preview for Windows 10+ and macOS 12.0+. Integration plugins are available for CAD and DCC apps including 3ds Max 2016+, SketchUp Pro 2019+ and Unreal Engine 4.27+. Epic hasn’t announced a final release date. New licences cost $499. The software is free for students and educators, and there is also a free Community Edition of the software which caps export resolution at 2K, watermarks output, and lacks access to Twinmotion Cloud.

https://www.twinmotion.com/en-US/docs/2023.1/twinmotion-release-notes/twinmotion-2023-2-PR1-release-notes

Redshift 3.5.19

Redshift 3.5.19 extends the new MatCap Shader node added in Redshift 3.5.17. Described as “laying the foundation for non-photo-real materials in Redshift”, it creates stylised materials by mapping an image onto a mesh relative to the render camera. The latest update adds new parameters to control the projection, including scale, rotation and whether camera or world space is used.

The update also improves rendering performance on multi-core CPUs, with render buckets now running concurrently on CPUs with 12 or more threads (6 or more cores). Redshift CPU also now supports AVX2 and Embree 4.1, the last-but-one release of Intel’s CPU ray tracing library.

Cinema 4D users get experimental support for automatically converting Cloner objects with instances of meshes to point clouds.
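For readers unfamiliar with the technique: a matcap (‘material capture’) shader samples a texture using the view-space surface normal, which is why the shading stays locked to the camera. The sketch below illustrates the general idea only, not Redshift’s actual implementation; the scale and rotation parameters simply mirror the kinds of projection controls the update adds, and the function name is hypothetical:

```python
import math

def matcap_uv(normal_view, scale=1.0, rotation=0.0):
    """Map a normalized view-space normal (nx, ny, nz) to matcap texture UVs.

    Generic matcap lookup: the x/y components of the normal index into
    the image, so shading follows the camera rather than the scene.
    """
    nx, ny, _ = normal_view
    # Optional in-plane rotation of the projection.
    c, s = math.cos(rotation), math.sin(rotation)
    rx, ry = c * nx - s * ny, s * nx + c * ny
    # Remap [-1, 1] normal components to [0, 1] texture space.
    u = 0.5 + 0.5 * scale * rx
    v = 0.5 + 0.5 * scale * ry
    return u, v

# A camera-facing normal samples the centre of the matcap image.
print(matcap_uv((0.0, 0.0, 1.0)))  # (0.5, 0.5)
```

Switching between camera and world space, as the new Redshift parameter allows, amounts to choosing which coordinate frame the input normal is expressed in before this lookup.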
The Distorter shader now supports 3D distortion using Maxon noise, making it possible to distort bump maps as well as color textures. Houdini users get support for asset picking in the Solaris viewport and the option to render packed Alembic primitives as Redshift Alembic procedurals. Blender users get “basic” support for the Jitter Node added to the Redshift core in the previous release.

Redshift 3.5.19 is available for Windows 10, glibc 2.17+ Linux and macOS 12.6+. The renderer’s integration plugins are compatible with 3ds Max 2018+, Blender 2.83+, Cinema 4D R21+, Houdini 17.5+ (18.0+ on macOS), Katana 4.0v1+ and Maya 2018+. The software is rental-only, with subscriptions costing $45/month or $264/year.

Blender 4.0

First introduced in Blender 2.92 in 2021, Geometry Nodes is a node-based framework for procedural modeling, object scattering – and, most recently, simple simulations.

The Node Tools project lets users package some of that functionality into more conventional tools, turning groups of nodes into operators that can be accessed from Blender’s menus. By default, the new tools will appear in a new menu for unassigned operators, but it will be possible to add them to existing menus, and to set which modes they are available in. Node-based tools are intended to be a more artist-friendly way to access and control node groups than the existing Geometry Nodes Modifier, as discussed in this post on the Blender Developer blog.

The video at the top of the story, posted by developer Jacques Lucke on X (formerly Twitter), shows basic examples: creating simple rotate and extrude tools. However, it should be possible to create more complex operators: the ultimate goal of the project is to make it possible to recreate any existing Edit Mode operator with nodes.

Blender 4.0 is due for a stable release in November 2023.
The current stable release, Blender 3.6, is available for Windows 8.1+, macOS 10.15+ (macOS 11.0 on Apple Silicon Macs) and glibc 2.28+ Linux, including Ubuntu 18.10+ and RHEL 8.0+ and CentOS and Rocky Linux equivalents.

https://projects.blender.org/blender/blender/issues/101778

Shōgun 1.10

First released in 2017, Shōgun is intended for calibrating Vicon’s optical motion-capture systems, and for streaming, solving and cleaning data captured from them. It has two key components: Shōgun Live, used for capturing data, and Shōgun Post, for cleaning and exporting it.

Key changes in Shōgun Live 1.10 include a new object alignment tool for aligning props. It adds new options for updating the orientation of the object within a scene, with users able to align the prop’s axes to global axes, a single marker or the center of a group of markers. There are also “significant improvements” to finger tracking accuracy and the stability of calibration.

Shōgun Post 1.10 gets better integration with MetaHuman, Epic Games’ framework for creating next-gen real-time characters. Setup scripts included with the installation automate the process of retargeting data to MetaHuman skeletons in Unreal Engine 5, including setting up the hierarchy, specifying degrees of freedom, and posing the target skeleton.

It is also now possible to reprocess data from .mcp files, which store data processed in real time: for example, to resolve frames dropped during the original processing operation. Finally, there are a number of UI changes, described as the first of a “series of iterative improvements” to the software’s interface.

Shōgun 1.10 is compatible with Windows 10 and 11. The software is usually bought as part of a complete Vicon motion-capture system, which is priced on enquiry.

https://docs.vicon.com/display/Shogun110/New+features+in+Vicon+Shogun+1.10
    1 point
  5. Yes! Those plants, I bet, are responsible. If instancing doesn't work, assign a Display tag and make them cubes or skeletons. I've had many issues like this with plants in the past
    1 point
  6. Check out this dude's videos. https://www.youtube.com/@thebradcolbow/videos
    1 point
  7. I've heard about that recently too on some YouTuber channels. I'm wondering if they're sponsoring the tablets and paying a lot for ads. Before them, all I heard about was Huion. They were the Wacom competitor about 2 years ago. The Kamvas Pro displays looked nice with a decent resolution. https://store.huion.com/collections/pen-display
    1 point
  8. Many thanks to HappyPolygon for these links. Who would have thought that a Sketchfab model could be so impressive. They offered it in an FBX format as well, so I downloaded it and went to work. While not modeled in quads, with no polygon flow to consider whatsoever, the modeling was solid with excellent UV's. There was very little corrupted geometry for me to correct. I think it was a game rip, so there were some details that were just not adequate as textures and needed to be replaced. The engine intakes and radar dish needed to be replaced. Likewise, there was no landing gear, and the corruption of the geometry around the landing bays was pretty bad, so that needed to be replaced as well. Landing gear rigging was added. Engine glow was added. Interior cockpit lights were added. The whole model was converted to Redshift. And now the finished model: Very happy to get that monkey off my back. Now I can start on the DS tunnel. Dave
    1 point
  9. Training team hype for Thursday September 14th. I think we'll see what's up the day prior on the 13th. Join the Maxon Training Team for an exciting #AskTheTrainer Special! We already know the exciting topic, but we’ll wait to tell you about it. Just one hint: It’s going to be exciting! https://www.maxon.net/en/event/ask-the-trainer-special-september-14th
    1 point
  10. No matter what the consumer trend is, I think the main drive behind it is always the client's needs. If the client wants fluid simulation or fire, then the studio will ask for a Houdini specialist and, secondarily, some other software like RealFlow or Navié Effex and Arnold. If the client wants product placement of complex things like shoes, or characters for cinematic shots for games, then it's ZBrush and Redshift. If the client wants animation of characters or crowd sims, it's Maya. If the client wants architecture, it's AutoCAD, 3ds Max and V-Ray. Blender and C4D have specific limitations but also a wide use for pretty much everything. Blender download counters don't mean a thing. I've downloaded it countless times, but it's debatable whether I will open it twice a year. Like with C4D, the best Blender users are those who have been using it since early versions. Many ask for Blender artists, but I don't think they will all find the right person for the job. I think nowadays the 3D visualization industry is just looking for wizards. Like in software engineering, they expect you to either know a ton of languages and helper technologies with 25y experience, or have such deep expertise in one particular language that you can do everything with it. As long as the job can be done. The difference with 3D is that there will never be a job opening for someone with experience in POV-Ray to maintain a 1997 project. All this conversation brings the question "what about home-brew software?" What kind of software experience do they seek for newcomers? How much time do Pixar, Disney, WetaFX need to train their staff for RenderMan, Presto, Hyperion, Meander, Tonic, Manuka, Odin, Tissue...
    1 point
  11. You just wrote down exactly why I made my reply. Unis train people up for specific niches "because that's what the big studios ask us to do". But why? The big studios represent (pulling a number out my arse here) let's say 10% of the 3D employment options. What are 90% of the 3D employers supposed to do with a legion of riggers and Substance painters? They're being trained up for jobs they're not going to get. Imagine if the colleges were pumping out thousands of mechanics who specialised in nothing but Wankel engine design. There's a few dozen jobs for these people, and all the other employers are normal garages trying to hire mechanics to fix your uncle's 10-year-old Honda Civic. But all the Wankel engineers have never lifted a spanner or changed an air filter in their life.
    1 point
  12. I was going to post something similar, Mash saved me the trouble. The guy that runs Nodefest here in Melbourne - they've had EJ and Tim Clapham doing presentations showing their C4D stuff - used to work for Animal Logic in Sydney. He worked on one of the HAPPY FEET films and told us he was driven to pack up and leave after he'd spent more than six months in a team working on just one shot. The mograph scene in Melbourne is fairly robust and a number of people work down here doing exactly what Mash described, following a brief and doing it from start to finish. The last time I visited a monthly catch up (there's a local mograph group that meets in the same building where my filmmaking buddies catch up) a young woman told me, yep, most of the people here doing mograph stuff are using C4D to do it. There's also a high profile game and FX training school in Melbourne, they use Maya and when I've spoken to grads there at various talks for Houdini (or Maya or whatever), they've usually gone, yeah, I've trained as a rigger, hopefully I can get some rigging work now that I'm trained up in it. Horses for courses. I'm not expecting C4D to be a main tool on the next five Marvel movies, but equally I don't see the mograph folk locally dumping C4D and going all in on Maya to do their Mograph stuff.
    1 point
  13. If I may be so bold. In my experience, a lot of universities running 3D courses seem quite myopically blind to what the real world demands of 3D artists. They look at what the most exciting things are for students, predominantly working in the movie industry, and seem to blindly target that as if it will get their students a job at the end. The reality is that the vast majority of 3D jobs are for generalists: people who can model, clean up CAD, texture, light, animate, render, post-produce in AE + PS. For every person spending 2 years working on animating Groot's left nut, there's 100 more working at a less prestigious company creating everyday artwork. Product renders, adverts, previz, theatre shows, projection shows, town planning, architects, packaging visualisers.... We recently went to some university recruitment days looking for new staff, and honestly the experience was quite miserable. Maybe 30 stands from employers looking for people and perhaps a couple hundred students handing out resumes, and virtually everyone we spoke to was unsuitable. More or less all of them had chosen one single specific niche, looking for a job at a studio. I need someone who can take a client's idea and run with it from start to finish. Instead, all we found were 20 people that want to do nothing but rig characters, 20 texture painters, a dozen modellers and a random scattering of UV layout people, simulation engineers and TDs / scripters. Not one of those people could produce anything from start to finish.
    1 point
  14. Feel free to check out my small GitHub repository as well. Splinify is a generator capsule that will create curved or linear splines from edges. For curves, you need to switch the capsule to Bezier mode. https://github.com/bmarl/Neutron
    1 point
  15. @HappyPolygon Here, "Edge to Spline" Capsule. It's in the scene database if you want to add it to your own database. It's a Spline Primitive that accepts a single mesh object as an input. If you want several objects to be considered, just put them under a Connect. I've also added selections if you want to control which polygon outlines are turned into splines. Edge_to_Spline.c4d
    1 point
  16. I'm hoping for a new particle solver with massive performance gains. Pyro has to have some kind of new particles under the hood already, doesn't it? And a new, fresh viewport, please. And an uncluttered dopesheet. Thanks! 🙂
    1 point
  17. Graph to spline 🙂 This one takes the shape of the graph UI and makes a spline in the viewport. 102_Graph_To_Spline.c4d
    1 point