
HappyPolygon

Registered Member
  • Posts

    1,898
  • Joined

  • Last visited

  • Days Won

    94

Everything posted by HappyPolygon

  1. Me waiting for the next release and promo videos
  2. A team of researchers from ByteDance has recently introduced a brand-new AI model capable of generating 3D models from nothing but a text description. Meet MVDream, a multi-view diffusion model that is able to generate geometrically consistent multi-view images from a given text prompt. https://mv-dream.github.io/
  3. It's not; a member of Maxon I spoke to recently referred to it as experimental and still under improvement. Let's hope C4D 2024 will give more clues as to where this is going.
  4. I've added it to the suggestions file. It's more probable it will be implemented for RS specifically rather than C4D...
  5. For me the simplest/fastest way is this. I've never used Cineware, though, so I don't know if it really helps you in AE. hp.c4d
  6. Haven't we already had this technology for decades? All 3D game engines do that. I thought apps like C4D also do that for viewport performance. Are you talking about an offline implementation of the mechanism?
  7. If it's the framerate of the animation, you can change it from here:
  8. Yes! Those plants, I bet, are responsible. If instancing doesn't work, assign a Display tag and make them cubes or skeletons. I've had many issues like this with plants in the past.
  9. Does the framerate concern viewport navigation or viewport animation? Try disabling some of these: usually Shadows and Transparency are the biggest slowdowns.
  10. Nuke 15.0

Nuke 15.0 features a number of updates to key pipeline technologies, including support for the current CY2023 VFX Reference Platform specification, and for USD 23.05. The release also introduces experimental support for OpenAssetIO, the Foundry-developed open standard for exchange of data between DCC and asset-management software, which was adopted by the Academy Software Foundation last year. The documentation for the beta describes the initial implementation as a “very basic tech preview” introduced to let studios begin testing OpenAssetIO in their pipelines.

In addition, Nuke 15.0 introduces native support for the Apple Silicon processors in current Macs. The change will make Nuke the latest key application in VFX pipelines to support M1 and M2 processors natively, Autodesk now having introduced support in Maya and Arnold, and SideFX having introduced support in Houdini.

The 15.0 releases also extend the USD-based 3D compositing system introduced in Nuke 14.0 last year. UI and workflow updates include a dedicated 3D toolbar in the viewer, and two-tier selections – for example, to select faces within an object – for more precise control. There are also accompanying updates to the Scanline Render system and to the GeoMerge node, used to merge stacks of objects into a single scene graph. The latter gets five new merge modes, intended to give users more control over how data is merged when using the new 3D system.

The underlying USD implementation now supports the USD Python bindings, making it possible to manipulate USD data directly through Python; and gets structural changes intended to improve performance, to make it easier to inspect and filter complex scenes, and to provide greater user control in future export workflows.

There are also further updates to AIR, Nuke’s machine learning framework, intended to enable users to train their own neural networks to automate repetitive tasks like roto. Training times when using the CopyCat node have been reduced by “up to 50%”, with key changes including the option to distribute training across multiple machines. Nuke Studio users also get support for AIR’s Inference node as a soft effect in the editorial timeline. The update also makes more complex Blink kernels available as soft effects, including Denoise, LensDistortion and blur effects.

Support for the NDI network protocol has also been extended, with support for audio when using NDI in Monitor Out.

Nuke 15.0 is currently in beta. It is compatible with Windows 10+, Rocky Linux 9 and macOS 12.0+, and supports Apple Silicon processors natively. Nuke 14.1 is also currently in beta, and is compatible with Windows 10+, CentOS 7.4-7.6 Linux and macOS 12.0+. It supports Apple Silicon using Rosetta emulation. To access either beta, you will need a Nuke, NukeX or Nuke Studio license with current maintenance. The beta period is expected to last until the end of the year.

https://www.cgchannel.com/2022/12/foundry-releases-nuke-14-0/
https://www.foundry.com/products/nuke/beta/15.0_14.1
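The USD Python bindings referred to above are Pixar's standard pxr modules; how Nuke exposes them inside its own Python environment isn't detailed here, but a rough, application-agnostic sketch of scripted USD inspection looks like this (the file name is hypothetical):

```python
# Minimal pxr sketch: open a USD stage and list its mesh prims.
# Generic USD Python, not Nuke-specific; "shot.usda" is a made-up file name.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("shot.usda")

for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        # Print the scene-graph path and the point count of each mesh
        mesh = UsdGeom.Mesh(prim)
        points = mesh.GetPointsAttr().Get()
        print(prim.GetPath(), len(points) if points else 0)
```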
Radical Live

The first major change to Radical’s services since Autodesk invested in the firm last year, Radical Live makes it possible to extract motion from video in real time as well as offline. Users can shoot video footage of an actor – the service works with a “consumer-grade camera”, and supports simultaneous capture of multiple actors – and stream it to the cloud, where it is processed on Radical’s servers. The resulting motion data can be previewed on standard desktop or mobile systems, or streamed into Blender, Maya, Unity or Unreal Engine in JSON format. Users can record live data to the software’s timeline, with the option to retarget the motion to a custom 3D character in Maya or Unreal Engine. Retargeting is “coming soon” to the Blender and Unity plugins.

Radical has also simplified its pricing, with both Live and the original Core service available through a single $96/year Professional subscription, which provides 36 hours of play time per year, usable across either service. Free subscriptions provide 24 hours of play time per year, but you can only preview the motion: you don’t get FBX downloads or the DCC integration plugins for Radical Live.

Radical Live can be used on any device that “can stream video and run a web browser” – Radical specifies the Chrome browser – including Windows, Linux and macOS systems, and iOS and Android mobiles. Integration plugins are available for Blender 2.83+, Maya 2022, Unity 2019-2022 and Unreal Engine 5.1+, with a plugin for Omniverse “coming soon”.

https://radicalmotion.com/products/live
https://www.youtube.com/@radicalmotion2207

Luxion buys Digizuite

Luxion has completed its acquisition of digital asset management provider Digizuite. The financial terms of the deal have not been disclosed. Luxion will now “[integrate] cutting-edge DAM capabilities into its portfolio”, which includes the KeyShot renderer, widely used for product design and visualization. Luxion and Digizuite will now integrate operations to bring users “enhanced products and services … to solve our customers’ biggest content management challenges”.

The announcement doesn’t go into detail on how, or whether, Digizuite’s technology will be integrated with KeyShot, but in July, when the buyout was first announced, Thorsgaard said that the firm aimed to provide “a full-service design solution, from ideation to asset management”. He commented that designers are “juggling dozens of tools” and facing “increasing production demands”, and that Luxion aimed to enable them to “focus more of their energy into their creative work”. At the time of the original announcement, Digizuite said that the acquisition would not result in any changes to customers’ existing “relationships or agreements”.

KeyShot is available for Windows 10+ and macOS 11.7+. Integration plugins are available for a range of DCC and CAD tools, including 3ds Max, Blender, Cinema 4D and Maya. The software is available rental-only. KeyShot Pro subscriptions cost $1,188/year.

https://blog.keyshot.com/luxion-announces-offer-to-purchase-digizuite

Cinebench 2024

Maxon, developers of professional software solutions for editors, filmmakers, motion designers, visual effects artists and creators of all types, is thrilled to announce the highly anticipated release of Cinebench 2024. This latest iteration of the industry-standard benchmarking software, which has been a cornerstone in computer performance evaluation for two decades, sets a new standard for performance evaluation, embracing cutting-edge technology to provide artists, designers, and creators with a more accurate and relevant representation of their hardware capabilities.

Cinebench 2024 ushers in a new era by embracing the power of Redshift, Cinema 4D's default rendering engine. Unlike its predecessors, which utilized Cinema 4D's standard renderer, Cinebench 2024 utilizes the same render algorithms across both CPU and GPU implementations.
This leap to the Redshift engine ensures that performance testing aligns seamlessly with the demands of modern creative workflows, delivering accurate and consistent results. Cinebench 2024 reinstates GPU benchmarking, a feature absent from Cinebench for the past decade. The latest version not only evaluates CPU performance; it also provides insights into the GPU's capabilities, reflecting the evolving technological landscape of creative software and workflows.

Cinebench 2024 is designed to accommodate a broader range of hardware configurations. It seamlessly supports x86/64 architecture (Intel/AMD) on Windows and macOS, as well as Arm64 architecture to extend its reach to Apple silicon on macOS and Snapdragon® compute silicon on Windows, ensuring compatibility with the latest advancements in hardware technology. Redshift GPU performance can be evaluated on systems with compatible Nvidia, AMD and Apple graphics processors.

Cinebench 2024 streamlines the benchmarking process by utilizing a consistent scene file for both CPU and GPU testing. This innovation enables users to discern the advantages of leveraging Redshift GPU, providing a real-world glimpse into the benefits of harnessing cutting-edge graphics hardware for rendering tasks.

Cinebench 2024 introduces a revamped user interface that enhances the user experience and showcases the incredible artistic endeavors achieved with the Redshift render engine in Cinema 4D. This dynamic interface serves as a testament to the potential of the Redshift engine while offering users a more intuitive and visually engaging experience.

Beyond the surface, Cinebench 2024 brings forth a host of performance-enhancing features. With a threefold increase in memory footprint compared to Cinebench R23, the software caters to the memory-intensive demands of modern projects. Moreover, a six-fold rise in computational effort and utilization of newer instruction sets ensures a benchmark that resonates with the complexity and sophistication of contemporary creative projects.

It's crucial to note that Cinebench 2024 scores cannot be directly compared to those of its predecessor, Cinebench R23. With the incorporation of Redshift, a different rendering engine, larger memory footprint, and more complex scenes, Cinebench 2024 offers a distinctly enhanced and accurate evaluation of modern hardware capabilities. Cinebench 2024 is immediately available for download starting today on the official Maxon website. Creatives, gamers and technology enthusiasts are invited to explore this groundbreaking benchmarking tool and experience firsthand the leap in performance evaluation that it offers.

https://www.maxon.net/tech-info-cinebench
https://www.maxon.net/en/downloads/cinebench-2024-downloads

Unreal Engine 5.3

One of the biggest changes in Unreal Engine 5.3 is the option to import simulations from DCC applications like Houdini in OpenVDB format and render them in Unreal Engine. Unlike third-party plugins for importing OpenVDB files, which convert the imported data to NanoVDB, Unreal Engine converts the data into Sparse Volume Textures (SVTs). Like UE4’s Volume Textures, SVTs can be indexed with 3D UV co-ordinates, but use “much less” memory. SVTs are supported in the Deferred Renderer as Heterogeneous Volumes for smoke and fire, or as Volumetric Cloud or Volumetric Fog. “More complete support” is available when rendering volumes with the Path Tracer, which can simulate scattering, self-shadowing and GI.
Another major new feature in Unreal Engine 5.3 is the Skeletal Editor, intended for rigging characters or editing skin weights directly inside Unreal Editor, rather than having to do so in DCC applications. Users can convert Static Meshes to Skeletal Meshes, add or edit bones, and paint skin weights, with support for standard painting options like Flood, Relax and Normalize. Animators also get updates to the Curve Editor and Smart Bake system, and workflow improvements for animation retargeting.

Unreal Engine 5.3 also includes a new Panel Cloth Editor, with support for XPBD (Extended Position Based Dynamics) constraints as well as the existing PBD constraints. Simulation metrics are no longer baked into the draped pose, so simulation quality should be improved even when using PBD constraints. The new editor uses a non-destructive workflow, replacing masks with reusable weight maps, and supports cloth-flesh interactions and Level Set Volume (LSV) collisions. According to Epic, it lays the foundation for both real-time cloth editing, and a “more VFX-oriented” approach based around simulation caching, enabling users to trade simulation time against accuracy.

Architects and visualization artists get proper support for orthographic rendering. Unreal Engine has had an Orthographic mode for cameras for some time, but Epic describes it as having been “impractical to use” due to the number of rendering features not supported. In Unreal Engine 5.3, it supports “most modern features”, including dynamic GI system Lumen, Nanite virtualised geometry, shadows, and Temporal Super Resolution (TSR) render upscaling.

Artists using Unreal Engine for previs or virtual production work get a number of new features, including a new Anamorphic Lens Model and Anamorphic Lens Solver. The changes should make it possible to render the CG elements of a shot in real time with distortion matching that of live-action footage shot using anamorphic lenses. Other key changes include initial support for Unreal Engine’s virtual camera system on macOS, and CineCamera Rig Rail, for setting up rail camera shots inside Unreal Engine.

Unreal Engine 5.3 also features further updates to virtualised geometry system Nanite, dynamic lighting system Lumen, the in-engine modelling and UV tools, and to the other simulation toolsets. Nanite gets support for explicit per-vertex tangents, which should improve results with low-poly models, and can now be enabled in Landscape Actors to boost runtime performance for large landscape assets. Lumen now supports more than one bounce when rendering reflections using hardware ray tracing, and Lumen Reflections can be used without Lumen GI, to improve the visual quality of levels with static lighting.

The in-engine modelling toolset gets a reworked UI and new vertex paint tools, while the UV editor can now visualise texture distortion via heatmaps. Niagara, Unreal Engine’s particle effects system, gets support for transforming fluid simulations after they are cached, making it possible to duplicate and offset caches to create more complex effects. The hair grooming toolset now supports streaming of groom and groom binding assets, and there is initial support for hair strands in the deformer graph. The new Procedural Content Generation toolset introduced in Unreal Engine 5.2 gets support for hierarchical workflows, making it possible to use multiple grid sizes when generating game worlds.
In addition, support for the USD and MaterialX standards is being extended further, with the option to edit USD materials directly in the viewport and to import USD files with MaterialX shading networks. Other key pipeline integration changes include the use of the OpenColorIO (OCIO) standard for internal texture conversions. You can find a complete list of changes via the links below, including new features for developers like Multi-Process Cook, which makes better use of available CPU cores and memory when using a build farm to convert content to a platform-specific format.

Unreal Engine 5.3 is available now for 64-bit Windows, macOS and Linux. Use of the editor is free, as is rendering non-interactive content. For games developed with the engine, Epic takes 5% of the gross royalties after the first $1 million generated.

https://www.unrealengine.com/en-US/blog/unreal-engine-5-3-is-now-available
https://docs.unrealengine.com/5.3/en-US/unreal-engine-5.3-release-notes/

Substance 3D Modeler 1.4

The biggest new feature in Substance 3D Modeler 1.4 is the reference image system. It enables users to import images while working in desktop mode, then view them in the background while sculpting in virtual reality – so far, the images are only visible in VR. The images are stored in save files, along with their positions.

In addition, there are two interesting new experimental features: Headless VR mode and ambient occlusion. The first is a new sculpting mode that makes it possible to use VR controllers while working on desktop. Adobe describes it as combining “the best of desktop and VR”, but also notes that “your mileage may vary” according to the hardware you’re using. The second is ambient occlusion, to give a greater sense of depth to models. It’s currently only supported for the virtual clay, but it can be enabled in Model mode, making it possible to use while sculpting, unlike the new viewport ray tracing system added in the previous release. Both are disabled by default, but can be toggled on from the Preferences panel, along with a Live boolean preview option, which calculates Boolean operations in real time.

The software’s interface has also been overhauled, rationalising the top and bottom bars. The biggest change is the new Environment panel accessible from the top bar, which can be used to adjust the position, intensity and color of Modeler’s three-point lighting. It’s also possible to toggle the key, fill and rim lights individually from the panel, or choose whether they cast shadows.

The Warp tool gets five new falloff curve settings. The image above shows the range of forms that can be sculpted when using them with sphere, cube and capsule brushes. In addition, multitouch is supported on Wacom graphics tablets and pen displays, making it possible to change the camera view using gestural controls. You can find a full list of changes via the link at the foot of the story.

Substance 3D Modeler 1.4 is compatible with Windows 10+. It is available via Substance 3D Collection subscriptions, which cost $49.99/month or $549.88/year. Perpetual licences are available via Steam, and cost $149.99.

https://helpx.adobe.com/substance-3d-modeler/release-notes/v1-4-release-notes.html
https://helpx.adobe.com/content/dam/substance-3d-modeler/release-1-4/ReferenceImages.mp4

Substance 3D Sampler 4.2

Key changes in Substance 3D Sampler 4.2 include a new version of Image to Material, its AI-trained system for generating PBR texture maps from a single source image.
You can read more about the toolset, which can generate base color, roughness, metallic, normal and displacement maps from photos, in this story, from back when the software was still called Substance Alchemist. The new version has been “trained on all material types”, with users now able to choose specific algorithms matching a range of common materials. Those shown in the video above include asphalt, ceramics, leather, metal, paint, paper, plastic and rubber, and stone, plus general categories for organic and ground materials. According to Adobe, the new algorithms give better results with many materials, with the release notes specifically namechecking fabric, plaster and wood.

The update also adds an entirely new AI-trained feature, the Upscale layer. As its name suggests, it upscales textures – it works with the base color, normal, height, roughness, and metallic channels of a material – with the AI filling in missing details. It’s possible to increase the resolution of a texture by 2x or 4x: there doesn’t currently seem to be an option to achieve a specific target resolution. The update also adds a new preference to enable or disable GPU-accelerated neural networks, presumably to control performance when working with the new AI features.

Other changes in Substance 3D Sampler 4.2 include a new layer resolution system, with layers taking the resolution of the document itself, or from the layer below. The UI displays the dimensions of each layer in pixels, to help visualize the impact of edits on the resolution of a material. In addition, the Crop filter now supports dynamic output resolution, and the Delighter filter – which removes baked-in shadows and lighting from source photos – has been “vastly improved”, although the release notes don’t specify what has changed.

Substance 3D Sampler 4.2 is available for Windows 10+, CentOS 7.0+/Ubuntu 20.04+ Linux and macOS 11.0+. New perpetual licences, available via Steam, cost $149.99. The Windows and macOS editions are also available via Adobe’s Substance 3D Texturing subscriptions, for $19.99/month or $219.88/year; or Substance 3D Collection subscriptions, for $49.99/month or $549.88/year.

https://helpx.adobe.com/substance-3d-sampler/release-notes/version-4-2.html

Vantage 2.1

Vantage 2.1 adds support for DLSS 3.5, the latest version of Deep Learning Super Sampling, NVIDIA’s AI-based image reconstruction technology. DLSS began as an image upscaling system, enabling applications in which it was integrated to increase viewport interactivity by rendering each frame at a lower resolution, then upscaling the image to viewport dimensions, but now also supports frame generation, further increasing frame rates by generating intermediate frames.

To that, DLSS 3.5 adds Ray Reconstruction, a new AI-trained render denoising system intended to supersede traditional denoisers like NVIDIA’s own OptiX AI for “intensive ray-traced scenes”. Although DLSS 3.5 is due to be integrated in architectural renderer D5 Render and NVIDIA’s own Omniverse, Chaos describes Vantage as the “first non-gaming application” to support it.

Ray Reconstruction makes panning the camera “smoother and faster” – in the video at the top of the story, the viewport render becomes much more stable when navigating the scene. However, reflections become blurrier, so users can switch to one of the other denoisers supported in Vantage, including OptiX AI, to improve image quality when the camera is static.
DLSS Ray Reconstruction is available on compatible NVIDIA RTX GPUs: those from 2018’s GeForce RTX 20 series and newer. Other new features in Vantage 2.1 include support for refraction glossiness, improving the accuracy with which materials like frosted glass can be recreated. The release also introduces experimental support for Intel Arc GPUs. The change makes Vantage compatible with discrete GPUs from all of the major manufacturers, Vantage 2.0 having added support for AMD cards.

Chaos Vantage is compatible with Windows 10+ and DXR-compatible AMD, Intel or Nvidia GPUs. The software is rental-only, with subscriptions costing $108.90/month or $658.80/year. It is included free with V-Ray Premium and Enterprise subscriptions.

https://docs.chaos.com/display/LAV/Chaos+Vantage%2C+v2.1.0

Affinity Photo 2.2

Serif has released Affinity Photo 2.2, the new version of its image editing and digital painting app. The update, which is free to existing users, adds support for color-management standard OCIO 2, and makes a number of workflow improvements. The update was released alongside version 2.2 of Serif’s other Affinity products, vector design software Affinity Designer 2.2 and page layout tool Affinity Publisher 2.2.

Key changes in Affinity Photo 2.2 include support for OCIO 2 (OpenColorIO 2), the latest major version of the color-management standard used in movie VFX and animation pipelines. The update also introduces extra keyboard shortcuts for pixel brushes and long-press shortcuts for tools more generally. Other workflow improvements include the option to change the color of guides and new dialogs for more precise control when using the Move or shape creation tools. The iPad edition gets preferences previously only available in the desktop edition.

Affinity Photo 2.2 is available for Windows 10+, macOS 10.15+ and iPadOS 15.0+. The update is free to registered users of version 2.x. New perpetual licences of the desktop edition cost $69.99; the iPad edition costs $18.49. A Universal License, which includes all of the Affinity apps, costs $164.99.

https://forum.affinity.serif.com/index.php?/topic/191643-faq-22-release-notes-improvements-and-major-fixes/

Substance 3D Connector

The new Connector system will extend the existing Send To functionality in the Substance 3D applications, which makes it possible to transfer data between them directly, without having to export then re-import a file. The demo from the livestream shows both a material and a 3D model being transferred directly from 3D capture and material-authoring tool Substance 3D Sampler to Blender, with the material properties remaining editable inside the open-source 3D software. Only transfer from Sampler to Blender was shown, but Adobe says that it aims to make the system a two-way live link between the Substance 3D apps and third-party tools.

Adobe says that it is currently doing internal tests with Blender, 3ds Max and Maya, but that it aims to support more DCC applications with the connector in future. The slide above from the livestream shows Cinema 4D, Houdini, Katana and Omniverse. As well as Substance 3D Sampler, the slide shows the logos of 3D texture painting app Substance 3D Painter and virtual reality modeling tool Substance 3D Modeler.

Adobe aims to make the underlying library for the Connector and sample plugins available open-source “early next year”. It hasn’t announced an exact release date yet.
You can find prices and system requirements for Substance 3D applications in our stories on the current releases of Substance 3D Designer, Modeler, Painter, Sampler and Stager.

https://www.adobe.com/creativecloud/3d-ar.html

After Effects Beta 2023

Adobe has been overhauling 3D compositing workflow in After Effects for some time now, beginning by introducing new 3D gizmos and camera controls in 2020. Last year, the firm released a beta build that made it possible to import 3D models in OBJ, glTF and GLB format directly into the software. Although that functionality remains in beta, the beta has now been expanded.

The key feature of the new “true 3D workspace” is the Advanced 3D renderer, a new hardware-agnostic GPU-based render engine. Also referred to in the online documentation as the Mercury 3D renderer, it is a final-quality renderer, unlike the existing 3D Draft preview mode. As well as 3D models, it can render other 3D layers such as extruded text, and 2.5D plane layers; and supports PBR materials via the Adobe Standard Material. According to Adobe, it should let motion graphics and visualization artists render less demanding shots directly inside After Effects without having to round-trip shots to DCC applications like Cinema 4D, a version of which is included with After Effects.

Also supports image-based lighting and stylized effects

The Advanced 3D renderer brings with it other new features of the true 3D workspace: image-based lighting for photorealistic work, and 2D/3D workflows for stylized work. The former makes it possible to light objects in the workspace with a HDRI environment map, imported in .hdr format. The latter makes it possible to use a rendered frame from a 3D model layer as a source, in order to apply 2D effects to portions of 3D scenes. According to Adobe, users can create “highly stylized renders using effects that reference another layer, such as Displacement Map, Vector Blur [and] Calculations”. However, the renderer is a work in progress, and currently has several known limitations, including lack of support for DoF and motion blur, and issues with shadows and lighting.

The new beta also features an improved version of the Roto Brush, After Effects’ rotoscoping tool. It enables users to create roto masks by roughly painting out the part of the frame to be isolated, with the software automatically generating the full mask and propagating it through the other frames of the video being rotoscoped. On paper, the new version isn’t as major an update as 2020’s Roto Brush 2.0, which applied machine learning to generate masks faster and more accurately. However, Adobe still describes it as a “major advance”, improving speed, quality of the masks generated, and stability, particularly in shots where the object being masked is occluded by foreground objects.

The true 3D workspace and improved roto brush are available via a separate beta build of After Effects. Adobe hasn’t announced a final release date for them. The current stable release, After Effects 23.6, is available for Windows 10+ and macOS 11.0+ on a rental-only basis. Subscriptions to After Effects cost $31.49/month or $239.88/year, while All Apps subscriptions, which provide access to over 20 of Adobe’s creative tools, cost $82.49/month or $599.88/year.

https://blog.adobe.com/en/publish/2023/09/13/new-ai-3d-features-at-ibc-2023

Photoshop 25.0

Originally released in beta in July, Photoshop 25.0 introduces a second new tool based on Firefly, Adobe’s generative AI toolset.
Adobe began integrating Firefly into Photoshop in May, with Generative Fill. It enables users to select a region of an image to replace, then enter a text description of the content they would like to generate in its place, with Photoshop creating the AI-generated content as a separate layer. Generative Expand is similar, but it’s an outpainting rather than an inpainting tool, so rather than replacing a region of image, it extends an existing image’s borders. Users can extend the canvas using the existing Crop tool, then type in a text description of the new background they would like to generate. Again, the AI content is generated non-destructively in a new layer. Both Generative Expand and Generative Fill are now supported with actions in Photoshop’s contextual task bar to streamline workflow.

Although the tools were only licensed for non-commercial use during the beta period, both can now be used on commercial projects, Adobe having just introduced credit-based pricing for Firefly. Creative Cloud subscriptions include a monthly quota of ‘fast’ generative AI credits, with one credit corresponding to a Generative Fill or Generative Expand operation on a 2,000 x 2,000px image. Photography plans get 100-250 credits/month, single app subscriptions to Photoshop get 500 credits/month, and All Apps subscriptions get 1,000 credits/month. Once they are used up, users can continue to use Firefly for free, but at slower speeds. From November 2023, it will also be possible to buy extra fast credits through a separate new subscription plan, with prices starting at $4.99/month for 100 credits.

Other changes in Photoshop 25.0 include an update to the Remove tool. Introduced in Photoshop 24.5, the tool, which is also AI-trained, makes it possible to do quick object removals by roughly painting over the object to be removed, with Photoshop automatically generating a new section of the background to fill the gap. In Photoshop 25.0, as well as painting over an object, you can draw a ring around it. According to Adobe, the new workflow makes it easier to replace large areas of an image, since it reduces the chances of missing parts of the area to be replaced.

In addition, Adobe has discontinued Photoshop’s Preset Sync feature, which made it possible to sync presets for Brushes, Gradients, Swatches, Styles, Shapes, and Patterns with a Creative Cloud account. That meant that presets could be transferred to a new device – for example, from a laptop to a desktop machine – when logging in to it with the same Adobe ID. The change follows Adobe’s announcement that it is discontinuing Creative Cloud Synced files across all Creative Cloud subscriptions, beginning on 1 February 2024.

Adobe has also updated Photoshop on the web, the browser-based edition of the software, available as part of Creative Cloud Photoshop subscriptions. Like the desktop edition, it includes the new Generative Fill and Generative Expand tools. Other changes in the September 2023 release include a more streamlined user interface, one-click quick actions for common tasks like background removal, and the option to invite other people to edit cloud documents.

Finally, Adobe has released Photoshop on the iPad 5.0, the latest version of the tablet edition of the software, also available to Photoshop subscribers. The update makes it possible to switch between variants of an image created using Generative Fill in the desktop edition of Photoshop via the Variations panel.
Other changes include the option to import and open photos from Lightroom, Adobe’s photo editing software, and updates to the color picker and color swatches.

Photoshop 25.0 is available for Windows 10+ and macOS 11.0+. In the online documentation, it is also referred to as the September 2023 release or Photoshop 2024. The software is rental-only, with Photography subscription plans, which include access to Photoshop and Lightroom, starting at $119.88/year. Single-app Photoshop subscriptions cost $31.49/month or $239.88/year.

https://helpx.adobe.com/photoshop/using/whats-new/2024.html
https://helpx.adobe.com/photoshop/using/generative-ai-faqs-photoshop.html
https://helpx.adobe.com/photoshop/using/whats-new/mobile-2024.html

Twinmotion 2023.2

Created by visualisation studio KA-RA, Twinmotion was designed to help architects with limited 3D experience to create still or animated visualisations of buildings. It imports hero models in a range of standard 3D file formats, or via live links to CAD applications. Users can then create background environments from a library of stock assets, and assign lights. Atmospheric properties – including clouds, rain and snow, and ambient lighting based on geographical location and time of day – can be adjusted via slider-based controls. The software was acquired by Epic in 2019, and initially made available for free, before being re-released commercially in 2020 with an aggressive new price point.

Twinmotion 2023.2 updates the software to the same foundation as Unreal Engine 5.3, the latest version of the game engine and real-time renderer. The updated engine has a number of new features, the major one being support for Lumen, Unreal Engine’s real-time global illumination (GI) system. The software now has two real-time rendering modes: Standard, which uses a volumetric approach to ambient lighting, and Lumen, which is a surface-based approach, using ray tracing to calculate one light bounce and one reflection bounce. Lumen generates better-quality renders, but uses more CPU/GPU resources and RAM. It provides a middle option between the old real-time rendering and the Path Tracer, which provides even greater render quality, but uses even more resources. In the initial release, Lumen is not available in VR mode, or on the cycloramas or LED walls used in virtual production, although the latter should be supported “soon”.

Twinmotion 2023.2 also moves the software to real-world light values for sun intensity. As a result, the software’s auto-exposure algorithm has been adjusted to work with a larger range of exposure values, with the total range increased from 4 stops to 26 stops. In addition, new Local Exposure settings have been added to help preserve shadow and highlight details in scenes with high dynamic ranges.

Other changes include a new Basic glass material, intended as a faster-rendering alternative to the more fully featured Standard and Colored glass materials for scenes that contain a lot of clear glass. In addition, assets from Sketchfab, the Epic-Games-owned online model library, can now be imported with the model hierarchy intact.

Epic Games has also replaced the old non-commercial free trial of Twinmotion, which capped exports at 2K resolution, but which was otherwise fully featured, with a new Community Edition. It’s still free, but as well as limiting resolution, it watermarks output, and lacks access to online collaboration system Twinmotion Cloud.
Twinmotion 2023.2 Preview 1 is only available in the Community Edition, which Epic Games says will now be the policy for all future preview releases. Twinmotion 2023.2 is available as a public preview for Windows 10+ and macOS 12.0+. Integration plugins are available for CAD and DCC apps including 3ds Max 2016+, SketchUp Pro 2019+ and Unreal Engine 4.27+. Epic hasn’t announced a final release date. New licences cost $499. The software is free for students and educators, and there is also a free Community Edition of the software which caps export resolution at 2K, watermarks output, and lacks access to Twinmotion Cloud.

https://www.twinmotion.com/en-US/docs/2023.1/twinmotion-release-notes/twinmotion-2023-2-PR1-release-notes

Redshift 3.5.19

Redshift 3.5.19 extends the new MatCap Shader node added in Redshift 3.5.17. Described as “laying the foundation for non-photo-real materials in Redshift”, it creates stylised materials by mapping an image onto a mesh relative to the render camera. The latest update adds new parameters to control the projection, including scale, rotation and whether camera or world space is used.

The update also improves rendering performance on multi-core CPUs, with render buckets now running concurrently on CPUs with 12 or more threads (6 or more cores). Redshift CPU also now supports AVX2 and Embree 4.1, the last-but-one release of Intel’s CPU ray tracing library.

Cinema 4D users get experimental support for automatically converting Cloner objects with instances of meshes to point clouds. The Distorter shader now supports 3D distortion using Maxon noise, making it possible to distort bump maps as well as color textures. Houdini users get support for asset picking in the Solaris viewport and the option to render packed Alembic primitives as Redshift Alembic procedurals. Blender users get “basic” support for the Jitter Node added to the Redshift core in the previous release.

Redshift 3.5.19 is available for Windows 10, glibc 2.17+ Linux and macOS 12.6+. The renderer’s integration plugins are compatible with 3ds Max 2018+, Blender 2.83+, Cinema 4D R21+, Houdini 17.5+ (18.0+ on macOS), Katana 4.0v1+ and Maya 2018+. The software is rental-only, with subscriptions costing $45/month or $264/year.

Blender 4.0

First introduced in Blender 2.92 in 2021, Geometry Nodes is a node-based framework for procedural modeling, object scattering – and, most recently, simple simulations. The Node Tools project lets users package some of that functionality into more conventional tools, turning groups of nodes into operators that can be accessed from Blender’s menus. By default, the new tools will appear in a new menu for unassigned operators, but it will be possible to add them to existing menus, and to set which modes they are available in.

Node-based tools are intended to be a more artist-friendly way to access and control node groups than the existing Geometry Nodes Modifier, as discussed in this post on the Blender Developer blog. The video at the top of the story, posted by developer Jacques Lucke on X (formerly Twitter), shows basic examples: creating simple rotate and extrude tools. However, it should be possible to create more complex operators: the ultimate goal of the project is to make it possible to recreate any existing Edit Mode operator with nodes. Blender 4.0 is due for a stable release in November 2023.
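For context on what Node Tools wraps in a friendlier interface, node groups can already be applied and driven through the Geometry Nodes modifier from Python. A minimal sketch, assuming a Geometry Nodes group named "MyRotateTool" already exists in the file (the name is hypothetical):

```python
# Minimal bpy sketch: attach an existing Geometry Nodes group to the active
# object via a Geometry Nodes modifier - the pre-Node-Tools way of reusing
# a node group. "MyRotateTool" is a hypothetical group name.
import bpy

obj = bpy.context.active_object

# Add a Geometry Nodes modifier and point it at the node group
mod = obj.modifiers.new(name="RotateTool", type='NODES')
mod.node_group = bpy.data.node_groups["MyRotateTool"]

# Exposed group inputs appear as custom properties on the modifier and can be
# set by key; the exact key depends on how the group defines its sockets.
```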
The current stable release, Blender 3.6, is available for Windows 8.1+, macOS 10.15+ (macOS 11.0 on Apple Silicon Macs) and glibc 2.28+ Linux, including Ubuntu 18.10+ and RHEL 8.0+ and CentOS and Rocky Linux equivalents.

https://projects.blender.org/blender/blender/issues/101778

Shōgun 1.10

First released in 2017, Shōgun is intended for calibrating Vicon’s optical motion-capture systems, and for streaming, solving and cleaning data captured from them. It has two key components: Shōgun Live, used for capturing data, and Shōgun Post, for cleaning and exporting it.

Key changes in Shōgun Live 1.10 include a new object alignment tool for aligning props. It adds new options for updating the orientation of the object within a scene, with users able to align the prop’s axes to global axes, a single marker or the center of a group of markers. There are also “significant improvements” to finger tracking accuracy and the stability of calibration.

Shōgun Post 1.10 gets better integration with MetaHuman, Epic Games’ framework for creating next-gen real-time characters. Setup scripts included with the installation automate the process of retargeting data to MetaHuman skeletons in Unreal Engine 5, including setting up the hierarchy, specifying degrees of freedom, and posing the target skeleton. It is also now possible to reprocess data from .mcp files, which store data processed in real-time: for example, to resolve frames dropped during the original processing operation. Finally, there are a number of UI changes, described as the first of a “series of iterative improvements” to the software’s interface.

Shōgun 1.10 is compatible with Windows 10 and 11. The software is usually bought as part of a complete Vicon motion-capture system, which is priced on enquiry.

https://docs.vicon.com/display/Shogun110/New+features+in+Vicon+Shogun+1.10
  11. Some of them, yes, but there are still some worth having. If I were you I'd wait till the announcement of the new C4D. They might present something you're currently working on, or something that can help you with that. It would be a waste of time if they presented a generator based on exactly what you've been working on all summer.
  12. Is it better than the remesher? I've seen it used only for connecting points, which means you could implement it to alter a topology for artistic purposes. Your example image gave me the idea of using it to eliminate overlapping polygons to make a better topology.

Tool: MoGenerator
Modes: Object Mesher, Vertex Topology

Object Mesher Parameters:
Object Reference List
Self Check - will check for overlaps on the objects themselves first.
N-Gons (checkbox) - will create N-gons from spline boundaries.

Vertex Topology Parameters:
Vertex Selection - Vertex Map Tag
Density (positive integer) - a factor that multiplies the number of generated points by the intensity of the vertex map.

Mechanism: In Object Mesher mode the hierarchy of objects in the reference list matters, as it defines the priority of which polygons should be kept or deleted. Objects in the list get compared in pairs to find overlapping polygons. The polygons of the lower object in the list are deleted but the points remain. Then the points are connected as polygons to the object higher in the list. In Vertex Topology mode, points are created on the existing geometry (child of the generator) using a vertex map tag to create more or fewer points. This feature is already present in Voronoi Fracture, but it doesn't use a Delaunay algorithm. It would be even better if you could generate points from materials and use a dithering algorithm to distribute them.

The Object Mesher mode also works with splines. This is the most useful part. Any number of splines of any type (open or closed) can be used to create geometry. In this case extra points will be created on intersecting lines. I haven't thought through exactly how this will work, but the result will be polygons whose borders come from the intersections of the splines (also hard to implement for 3D geometry). In the above example the left image doesn't depict only one spline but a bunch of overlapping and intersecting ones. Using the new tool, the image would also depict the polygonal topology of the new meshed object using N-gons. The right image is something that looks like the topology of the new object, but using Delaunay topology on the intermediate points of the splines (a rough sketch of that point-connection step follows this post).

Disadvantages: It will be hard to implement for 3D geometry. It's more convenient for 2D objects like the ones in your example image. I guess you could guide the algorithm like the Spline Mask does. The Voronoi Fracture could make use of the Delaunay algorithm as a new fracturing type, among others I've already suggested to MAXON.

To be honest, Delaunay doesn't sell much these days. I'd be more interested in some new procedural shaders, or someone updating the Curious Animal effectors. I don't think this is possible. Even in UE and Cities: Skylines, making procedural roads branch out requires a set of pre-textured and UV-mapped sections.
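A minimal sketch of that point-connection step, assuming points have already been sampled from the overlapping splines. It uses SciPy's Delaunay triangulation on a made-up 2D point set; it is illustrative only, not a Cinema 4D generator:

```python
# Connect scattered 2D points into triangles with a Delaunay triangulation,
# the core operation behind the proposed "MoGenerator" idea.
# The point set below is made up; in practice it would be sampled from splines.
import numpy as np
from scipy.spatial import Delaunay

points = np.array([
    [0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0],   # outer boundary
    [1.0, 1.0], [0.5, 1.5], [1.5, 0.5],               # interior points
])

tri = Delaunay(points)

# Each row of `simplices` holds three indices into `points`, i.e. one triangle.
# A generator would turn these triangles (or merged N-gons) into polygons.
for a, b, c in tri.simplices:
    print(f"triangle: {points[a]}, {points[b]}, {points[c]}")
```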
  13. Now that I think of it, I might not need this... We have the Mesh Deformer, which is the same thing but with polygonal objects, where I have access to the points.
  14. Does anyone remember how to manipulate the points of an FFD? I asked that on RocketLasso some years ago and Chris tackled it quite easily because he had the same question himself. But I forgot how he did it and in what season/episode that was. I think it involved XPresso.
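In the meantime, a minimal Python sketch of the obvious approach, assuming the FFD cage can be treated like any other point object in Cinema 4D's Python API (the object name and the offset are hypothetical; run it from the Script Manager):

```python
# Move every cage point of an FFD object, assuming the FFD exposes its cage
# through the standard PointObject interface. "FFD" is the object's name here.
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()
    ffd = doc.SearchObject("FFD")        # hypothetical object name
    if ffd is None:
        return

    pts = ffd.GetAllPoints()                          # cage points, local space
    pts = [p + c4d.Vector(0, 10, 0) for p in pts]     # e.g. lift the whole cage
    ffd.SetAllPoints(pts)

    ffd.Message(c4d.MSG_UPDATE)   # notify C4D that the point data changed
    c4d.EventAdd()                # refresh the viewport

if __name__ == '__main__':
    main()
```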
  15. I get too many advertisements for this product on my FB wall. Is it worth it? Are there any other competing manufacturers/products with better quality at the same or lower price? https://www.storexppen.eu/topic/deals.html I've been thinking about getting a stylus for drawing splines and sculpting for a long time.
  16. If I were ever obliged to use particles for rain and had the knowledge to make the splashes on the surfaces, then I'd do only the splashes and add the rain in compositing. I don't think anyone would notice as long as the post-effect rain is consistent with the camera angle and movement.
  17. D5. FREE, FAST, EASY, but you need the latest drivers and, of course, NVIDIA.
  18. From the album: Personal Projects & Experiments

    I spent too much time on this... I had to create two different TParticles and connect each group separately to give the look of an interconnected web structure.
  19. HappyPolygon

    speck 2

    From the album: Personal Projects & Experiments

    Cheen looks good for microscopic forms.