HappyPolygon

Registered Member
Everything posted by HappyPolygon

  1. There's a persistent flood of no-doubt malicious content from Russia on the forum. I believe the forum has been hacked, because visitors can't post (as far as I know). Also, there's no reaction button to at least flag it as Dislike or Troll. I'm constantly reporting these posts and they get deleted, but I don't feel safe. How can we be sure our e-mails or other sensitive data have not been collected?
  2. I was thinking about making a topic about this for weeks, asking users who use Firefly to share their thoughts on copyright issues and their experience with it, until today, when I stumbled upon this article: https://www.creativebloq.com/news/adobe-firefly-ai-legal-fees?utm_source=facebook.com&utm_content=computer-arts&utm_campaign=socialflow&utm_medium=social&fbclid=IwAR3f7vRA30r0U2WfCmjP3-DFgL5o6Imeey94o0roWYK_P01jjf0T_uGmeEg It seems Adobe is the first to go commercial with its AI text-to-image generator with no legal concerns. Adobe is feeling confident about the training data, as it came from the Creative Commons-licensed content of its own libraries, thus dodging any copyright infringement. But what about selling a computer-generated image that had no significant human intervention? Isn't this still under legal dispute? And how would a professional price such work? Has anyone of you used Firefly? What do you think of it? Were there any limitations on generated content per day? Does it work offline? How much would you pay to use an AI service?
  3. PLA a single spline and use it in a Cloner using a Time Effector
  4. I would try deleting the Phong Tag and see if it helps. Then put each individual part in a Connect and test different Weld values
  5. I've never heard of a method of assigning all variables in a list. It's definitely unorthodox. Could that be a programming paradigm from another language? Flask, Perl, Julia, Elixir? Maybe he learned to code using ChatGPT... Either way, if he documents (explains) well what he does, I'd probably watch him but write the code my own way. At the end of the day, it's about why you watch this stuff: is it to learn how to code Python, or to learn how to do something specific in Python? It's more common for people to dislike content they are already knowledgeable about. Last month someone uploaded a video "Why you should DUMP Cinema 4D for HOUDINI (in 2023)". All of his 10 "reasons" were crap, and I commented on that, debunking his debunking... I more or less got the same reply you got. People are just people. They can't handle public criticism; that's why they just upload their own biased and subjective thoughts.
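For reference, Python itself can assign several variables in one statement, which may be what the video showed; a minimal sketch (the values are made up for illustration):

```python
# Python supports assigning several variables at once via iterable
# unpacking and chained assignment -- possibly what the video showed.
x, y, z = [10, 20, 30]        # unpack a list into three names
a = b = c = 0                 # chained assignment: all three names -> 0
first, *rest = [1, 2, 3, 4]   # starred unpacking: first=1, rest=[2, 3, 4]
print(x, y, z, a, first, rest)
```

Whether that is what the YouTuber did is anyone's guess, but it is valid, idiomatic Python rather than something borrowed from another language.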
  6. Accidentally changing dark skin to white ... ago2orq_460svav1.mp4
  7. Maybe a Quad breaks planarity when more than 2 points are collinear. I see your Quad resembles a perfect triangle. Does the detection hold true when you break the Quad into two triangles?
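One way to test this numerically (a rough sketch, not necessarily the algorithm the software uses): four points are coplanar exactly when the scalar triple product of the edge vectors from the first point is zero.

```python
# Rough numerical planarity check for a quad (not the app's actual
# algorithm): p0..p3 are coplanar iff the scalar triple product of the
# edge vectors from p0 is zero (within a tolerance).

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_planar(p0, p1, p2, p3, tol=1e-9):
    n = cross(sub(p1, p0), sub(p2, p0))          # normal of triangle p0,p1,p2
    return abs(dot(n, sub(p3, p0))) <= tol       # p3 in that plane?

# A quad with three collinear points is still coplanar by this test, but
# the normal of the degenerate triangle (p0, p1, p2) is the zero vector,
# which can break normal-based planarity checks.
print(is_planar((0,0,0), (1,0,0), (2,0,0), (1,1,0)))   # degenerate but flat
print(is_planar((0,0,0), (1,0,0), (1,1,0), (0,1,1)))   # genuinely non-planar
```

If the detection flags the collinear case, the software's check is probably normal-based rather than triple-product-based.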
  8. One more thing I found, but I don't want to pollute the forum with a new post: Txt-to-Video with GEN-2 AI Txt-to-Video with GEN-2 AI.webm
  9. Nice smooth animation. I know it as the Geneva gear, a special kind of gear that translates continuous motion into intermittent motion. It's used in clocks to move the second hand. I think it's mostly the geometry that has to be considered when making animations like this, because I like to make them work in dynamic simulations. I assume the XPresso setup only gets more complicated when more gears have to cooperate.
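That geometry can be sketched from the standard Geneva-drive kinematics; this is a generic derivation, not the XPresso setup from the scene (slot count and crank radius here are illustrative):

```python
import math

# Generic Geneva-drive kinematics (my own sketch, not the scene's setup).
# n: number of slots, r: drive-crank radius, alpha: crank angle (radians)
# measured from the line of centres. The wheel only turns while the pin
# is engaged, i.e. for |alpha| <= pi/2 - pi/n; otherwise it is locked.

def geneva_wheel_angle(alpha, n=4, r=1.0):
    c = r / math.sin(math.pi / n)            # centre distance for tangential entry
    half_sweep = math.pi / 2 - math.pi / n   # crank angle at engagement/exit
    if abs(alpha) > half_sweep:
        return None                          # pin out of slot: wheel locked
    # pin position relative to the wheel centre -> wheel rotation angle
    return math.atan2(r * math.sin(alpha), c - r * math.cos(alpha))

# For a 4-slot wheel, the crank's 90-degree engagement arc turns the
# wheel exactly one quarter turn:
sweep = math.pi / 2 - math.pi / 4
print(round(math.degrees(geneva_wheel_angle(sweep, n=4)), 3))   # 45.0
print(round(math.degrees(geneva_wheel_angle(-sweep, n=4)), 3))  # -45.0
```

Driving the wheel from this function (rather than simulating contact) is one way to keep such an animation stable before handing it over to dynamics.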
  10. I wanted to share this but not make a new post axoPVEL_460svav1.mp4
  11. Just that they don't rotate. Yes, they are curved like grandstands, but they don't rotate like in your scene.
  12. I think Pyro is the feature to blame for the hardware support drop. Any new feature requiring "tensor architecture" won't be able to run on older CPUs/GPUs. At least it's a good thing they try to keep it compatible with both AMD and Nvidia. Unlike Redshift, which, as far as I know, is exclusively Nvidia-compatible.
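On the topic of hardware-support drops: on Linux, you can check which instruction sets (such as AVX2, mentioned in the Maxon announcement below) your CPU reports without any extra tools, since the kernel lists them in /proc/cpuinfo. A minimal sketch, assuming a Linux system (on Windows, CPU-Z does the same job):

```python
# Quick Linux-only check of CPU instruction-set flags: the kernel
# exposes them on the "flags" line of /proc/cpuinfo.

def has_avx2(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx2" in line.split(":", 1)[1].split()
    except OSError:
        pass  # file missing (non-Linux system): report unsupported
    return False

print("AVX2 supported:", has_avx2())
```

The same function works for any other flag (e.g. "avx512f") by swapping the string it searches for.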
  13. Writing a book is really exhausting. He probably had a breakdown designing the cover at the end. But on the other hand he could use one of the images from inside the book...
  14. MAXON Future versions of Cinema 4D and Redshift will require AVX2 support on Windows and Linux. Introduced with 2013’s Intel Haswell processors and later with AMD's Excavator CPU family, the AVX2 (Advanced Vector Extensions 2) instruction set provided new features, instructions and a new coding scheme to developers. As Maxon continues to modernize and improve Cinema 4D and Redshift over time, the inclusion of the AVX2 set will now become a requirement moving forward. Most users won't be impacted, as the obsolete CPUs are 8 to 10 years old. But for those who will be affected, we want you to understand in advance that future versions of Cinema 4D and Redshift will not run on their legacy systems. Obsolete CPUs are:
      • Anything before Q2 2013 from Intel desktop CPUs
      • Anything before Q2 2013 from Intel server CPUs (AVX2 introduced with the Haswell-based family of Xeon CPUs)
      • Anything before Q2 2015 from AMD CPUs (AVX2 introduced with the Excavator family of CPUs)
      If you have any doubts as to whether your CPU utilizes AVX2, you can use CPU-Z to evaluate the instruction sets supported by your CPU. We understand this may be a challenge, but we believe finding a balance between supporting older technology and embracing modern technology is an important step in keeping our software ready for the future. Polycam Polycam has added a new 360 Capture feature to its 3D scanning app for Android, iOS and web. The AI-based feature makes it possible to capture spherical 360-degree images on devices without 360-degree cameras, for use as environments in DCC software or game engines. The toolset, which is based on generative AI model Stable Diffusion, was rolled out in the iOS edition of the app earlier this week in the Polycam 3.2.1 update. First released in 2020, Polycam takes advantage of the LIDAR sensor in new iOS devices to build up full-colour textured 3D scans of the user’s surroundings. 
The app’s Room Mode generates floor plans of interior spaces from scans, and its photogrammetry-based Photo Mode generates textured 3D geometry from photos of objects or terrain. The free edition only exports in glTF format; the Pro edition exports a range of standard 3D file formats, including OBJ, FBX, Collada, STL, and point cloud formats including PLY, LAS and XYZ. To that, Polycam has added 360 Capture, a new toolset for capturing 360-degree images on mobile devices without built-in 360-degree cameras. Users rotate their phone in the same way they would to capture a panoramic image, with the app automatically filling in the top and bottom of the spherical environment. The missing poles of the environment are generated using Stability AI’s Stable Diffusion AI model. As well as being used as a background for 3D captures, the resulting 360-degree image can be used as an environment within DCC software or game engines – the launch video namechecks Blender, Cinema 4D, Unity and Unreal Engine – although it isn’t an HDR image, limiting its use for environment lighting. The new feature is available in the free base edition of the app, although output is watermarked, so you will need a paid Pro subscription to export a clean image. Polycam is available for Android 8.0+, iOS 15.2+ and iPadOS 15.2+, and online. The base edition is free, and includes unlimited LIDAR captures, but has a lifetime cap of 180 Photo Mode captures, and only exports to other DCC applications in glTF format. The Pro edition now costs $14.99/month or $79.99/year, and unlocks the other export formats, the building plan tool, unlimited Photo Mode captures, and use of Photo mode online. https://apps.apple.com/us/app/polycam-lidar-3d-scanner/id1532482376 RealityScan 1.1 Epic Games has released RealityScan 1.1, the latest version of its free iOS photogrammetry app, which turns photos of real-world objects into textured 3D models for use in AR, game development or general 3D work. 
The release, the first major update to RealityScan since its release last year, improves workflow when removing unused images, deleting projects, and exporting models to Sketchfab. RealityScan 1.1 is primarily a workflow and performance update. The UI has been reworked to make it easier to remove unused images, and to manage projects, including the option to select and delete multiple projects from the Project Library. It is also now possible to name and add descriptions to models before exporting to Sketchfab, and to preview the final model in the embedded Sketchfab viewer. Since the original release, Epic has also switched from JPEG to HEIC format for source images, reducing file sizes – and therefore upload times – by “around 50%”. In addition, the Smooth preset, which generates a smoother, if less detailed mesh, is now the default. It is “around 3-4x faster” than the Detailed preset. RealityScan 1.1 is available free for iPhones and iPads running iOS 16.0+ and iPadOS 16.0+. An Android version is due “in 2023”. https://apps.apple.com/us/app/realityscan/id1584832280 Radeon ProRender 3.6 for Blender AMD has released Radeon ProRender 3.6 for Blender, the latest version of its free GPU renderer. The update switches the software from OpenCL to HIP for final-quality GPU rendering, and improves interactive rendering, adding support for displacement and better real-time GI. The update was released alongside Radeon ProRender 3.4 for Maya and Radeon ProRender 2.4 for Houdini and USD, the latest versions of the Maya and Houdini editions of the renderer. All three updates implement some of the key changes from Radeon ProRender 3.1, the latest version of the Radeon ProRender SDK, which was released earlier this year. One of those is to switch the API used for final-quality rendering from OpenCL to HIP, AMD’s own open-source technology, also now used in Blender’s own Cycles renderer and Redshift. 
HIP is now used as the backend for Radeon ProRender’s RPR – Final render mode, although the legacy OpenCL backend is still available for older GPUs. Although the change may result in a “slight performance boost on some cards”, the main reason is the decline of support for OpenCL within CG software development. In addition, interactive rendering mode RPR – Interactive – which uses Vulkan rather than HIP or OpenCL – has been updated to bring its output closer to RPR – Final. Interactive rendering now uses the Nvidia-developed global illumination algorithm ReSTIR for direct lighting, and AMD’s own FSR 2 technology for render upscaling. RPR – Interactive is also now better supported on older GPUs without dedicated ray tracing cores, and on cards with less than 8GB of graphics memory. Other key changes include support for displacement, and better support for MaterialX: RPR – Interactive supports all of the materials in AMD’s free online MaterialX material library. In its blog post, AMD claims that RPR – Interactive is “nearly as interactive as Eevee”, Blender’s real-time render engine, and “close to the lighting quality of Cycles”. You can judge the results for yourself in the side-by-side comparison video embedded above. Radeon ProRender for Blender 3.6 only: better support for Geometry Nodes Changes unique to the Blender plugin include support for the new Mix node introduced in Blender 3.4, and better support for the Curves Info and Math nodes. In addition, the UI of the toon shader added in Radeon ProRender for Blender 3.2 has been updated to support single-colour, three-colour and five-colour modes. Radeon ProRender 3.6 for Blender is compatible with Blender 2.93+ on Windows, Linux and macOS. Radeon ProRender 3.5 for Maya is compatible with Maya 2020+ on Windows and macOS. Radeon ProRender 2.4 for Houdini is compatible with Houdini 19.5 on Windows, Linux and macOS. Source code for all three plugins is available under Apache Licence 2.0. 
https://github.com/GPUOpen-LibrariesAndSDKs/RadeonProRenderBlenderAddon/blob/master/Changelog.md https://github.com/GPUOpen-LibrariesAndSDKs/RadeonProRenderMayaPlugin/blob/master/CHANGELOG.md https://github.com/GPUOpen-LibrariesAndSDKs/RadeonProRenderUSD/blob/master/CHANGELOG.MD Substance 3D Designer 13.0 Adobe has released Substance 3D Designer 13.0, the latest version of its material-authoring software. The update adds a new set of Spline and Path tools for generating and manipulating 2D shapes, and adds new ‘portal’ functionality to the Dot node to help keep node graphs organised. However, the software’s procedural geometry toolset has been discontinued, with model graphs removed from Substance 3D Designer entirely, less than two years after they were introduced. The update introduces a new set of Spline and Path tools, for generating and working with 2D shapes, either as continuous splines or a series of line segments. The Spline tools consist of 25 new nodes for generating, merging, transforming and rendering splines. The Path tools consist of 10 nodes to generate paths from grayscale masks, then convert them to splines. The resulting splines can then be rendered directly to generate surface patterns for materials; used to control the scattering of other patterns; or used to control the warping of images. Other changes include an update to the Dot node, used to visually simplify node graphs by rerouting and grouping connections between nodes. In Substance 3D Designer 13.0, it gets new portal functionality, making it possible to transfer data from one point on the graph to another without generating a wire connecting the two points. In addition, the underlying architecture has been updated to Substance Engine 9, which introduces support for loops, making it possible to repeat functions within Substance Function Graphs. Use cases shown in this video tutorial include generating a grid of sample points to create a custom blur. 
The update also introduces a Home Screen similar to those in other Adobe products, providing quick access to tutorials and project settings; and native French, Italian and Portuguese language editions. However, perhaps the biggest change in Substance 3D Designer 13.0 is not what has been added, but what has been taken away. Substance model graphs, the basis of the software’s procedural geometry system, have been removed, less than two years after they were originally introduced in Substance 3D Designer 11.2. The new release can no longer open or edit model graphs created in previous versions of the software. In a post on its community support forum, Adobe attributed the decision to lack of user uptake: “Our team has [analysed] adoption rates for the feature, and we’ve come to the conclusion that Designer is not the optimal platform for procedural geometry within the Substance 3D ecosystem. “To fully concentrate on delivering [new features] for material artists, we have made the pragmatic decision to cease development of Substance model graphs and remove the feature from Designer entirely.” The decision returns the software to what it was two years ago: a dedicated material-authoring tool. Substance 3D Designer 13.0 is available for Windows 10+, CentOS 7.0/Ubuntu 20.04+ and macOS 11.0+. Perpetual licences of the software are available via Steam and cost $149.99. It is also available via Adobe’s Substance 3D subscriptions. Substance 3D Texturing subscriptions cost $19.99/month or $219.88/year; Substance 3D Collection subscriptions cost $49.99/month or $549.88/year. Subscriptions to the Linux edition require a Creative Cloud Plan for Teams priced at $1,198.88/year. https://helpx.adobe.com/substance-3d-designer/release-notes/version-13-0.html Arnold 7.2.2 Despite the small change in version number, Arnold 7.2.2 is a fairly significant update in terms of render performance and quality. 
When editing lights or shaders in complex scenes – those with “millions of instances” – the time to first pixel is “nearly instantaneous, instead of having to wait multiple seconds”. Autodesk suggests that the changes will be particularly significant when working on scenes that use the USD Point Instancer. Global light sampling performance – a focus of the previous release – has been further improved, reducing render times on scenes with large numbers of lights. Arnold’s CPU renderer should now generate visually identical results across supported hardware. According to Autodesk, while images rendered on AMD, Apple and Intel CPUs were previously often “perceptually equivalent”, they should now be “perceptually identical”. The amount of noise in volumes when using a mesh light with very small triangles has also been reduced, as shown in the image above. In addition, support for USD and MaterialX has been further extended, with the update making it possible to render MaterialX nodes defined in third-party node definitions. Arnold’s integration plugins have also been updated to support the new features:
      • 3ds Max: MAXtoA 5.6.3
      • Cinema 4D: C4DtoA 4.6.3
      • Houdini: HtoA 6.2.2.0
      • Katana: KtoA 4.2.2.0
      • Maya: MtoA 5.3.2
      Arnold 7.2.2 is available for Windows 10+, RHEL/CentOS 7+ Linux and macOS 10.13+. Integrations are available for 3ds Max, Cinema 4D, Houdini, Katana and Maya. GPU rendering is supported on Windows and Linux only, and requires a compatible Nvidia GPU. The software is rental-only, with single-user subscriptions costing $50/month or $400/year. https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_core_7220_html Coil Pro Mocap hardware firm Rokoko has announced the Coil Pro, a nifty electromagnetic field generator that will become the first product from the firm’s new Volta Tracking Technology platform on its release later this year. 
It will provide absolute positioning data for Rokoko’s Smartsuit Pro II and Smartgloves inertial capture systems, reducing drift and improving the accuracy of multi-character interactions. Rokoko has also cut the price of the Smartsuit Pro II by $1,000. Inertial measurement unit-based (IMU) capture systems provide a relatively inexpensive, non-restrictive way to record full-body motion for games, animation and visual effects projects. Unlike optical capture systems, inertial systems don’t suffer from occlusion, in which a foreground actor or object prevents accurate recording of a background actor. However, because they don’t record absolute position, the motion recorded often ‘drifts’ in space over time, even when an actor is just moving on the spot. The Coil Pro aims to fix – or at least reduce – the issue, by making it possible to combine measurements based on an electromagnetic field (EMF) with IMUs. The device generates an electromagnetic field that receivers in Rokoko’s capture suits can detect, enabling its technology to reconstruct an actor’s position in space. The field has a usable radius of around 5 metres, but if an actor moves outside it, the existing IMUs in the suit take over, and resync when the actor re-enters the capture volume. Although electromagnetic systems historically aren’t as accurate as optical systems, they’re inexpensive and quick to set up. According to Rokoko’s blog post, the Coil Pro can be used pretty much anywhere its suits can, “mounted on all standard tripods, hung from the ceiling, or just placed on [a] desk”. https://www.rokoko.com/products/coil-pro Substance 3D Painter 9.0 The major new feature in Substance 3D Painter 9.0 is the 3D path system, which makes it possible to apply paint strokes to a model along editable guide curves. Users can draw out Bezier-based curves over a 3D model by clicking on its surface in the viewport, with Substance 3D Painter automatically applying the paint stroke along the curve. 
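The IMU drift described in the Coil Pro item above can be illustrated with a toy simulation (not Rokoko's sensor model): even a tiny uncorrected bias in measured acceleration, once double-integrated into position, grows quadratically with time.

```python
import random

# Toy illustration of IMU drift (not Rokoko's actual sensor model):
# an actor standing still has zero true acceleration, but the sensor
# reads a small constant bias plus noise. Double integration turns
# that bias into an apparent position that grows quadratically.

def integrate_position(accel_samples, dt):
    velocity, position = 0.0, 0.0
    for a in accel_samples:
        velocity += a * dt          # first integration: accel -> velocity
        position += velocity * dt   # second integration: velocity -> position
    return position

random.seed(0)                      # deterministic run for illustration
dt, seconds = 0.01, 10.0
n = int(seconds / dt)
bias = 0.02                         # m/s^2 of uncorrected sensor bias (assumed)
readings = [bias + random.gauss(0.0, 0.05) for _ in range(n)]
drift = integrate_position(readings, dt)
print(f"apparent position after {seconds:.0f} s of standing still: {drift:.2f} m")
```

With these made-up numbers the "stationary" actor appears to wander roughly a metre in ten seconds, which is exactly the error an absolute reference like the Coil Pro's EMF field corrects.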
Closing the curve creates a seamlessly repeating pattern. Workflow is non-destructive, with both the brush properties and the curve itself remaining editable throughout, making it possible to reshape or change the look of existing paint strokes. As well as painting, users can erase or smudge existing paint layers along a path. It is possible to use the new 3D path system to create or edit mask layers as well as paint layers; and the implementation supports other standard paint features like symmetry and dynamic strokes. It doesn’t work with some existing tool brushes (shown at 01:20 in the video at the top of the story), but comes with new tool presets for creating stitches, zippers, welds, puckers and seams. In addition, the Dynamic Strokes system, which changes the brush stamp properties along the length of a stroke, has been updated. New features include the option to define the stamps used for the start, middle and end of a stroke; and size and spacing properties for strokes. There are also new stroke length properties for use with the 3D path system, making it possible – among other things – to create strokes with repeating patterns that update according to distance drawn. In addition, Substance 3D Painter’s default base materials have been updated “to make them more useful to everybody”. The legacy materials are still available via Adobe’s online Substance 3D Assets library. Other changes include the option to update textures in a library when reloading glTF files. It is also now possible to add project path information when exporting files in USD format; and project creation and configuration parameters for the USD format are now exposed for Python scripting. Substance 3D Painter 9.0 is available for Windows 10, CentOS 7.0/Ubuntu 20.04+ Linux and macOS 11.0+. Perpetual licences are available via Steam and cost $149.99. The software is also available via Adobe’s Substance 3D subscriptions. 
Substance 3D Texturing subscriptions cost $19.99/month or $219.88/year; Substance 3D Collection subscriptions cost $49.99/month or $549.88/year. Subscriptions to the Linux edition require a Creative Cloud Plan for Teams priced at $1,198.88/year. Photoshop 24.6 Photoshop 24.6 makes a number of workflow improvements to the software’s layers system. Arguably the most significant is that painting is now non-destructive when working with text, shape, video layers or smart objects, as shown in the video above. New brush strokes are created in a new layer, rather than over the content of the existing layer. It is also now possible to deselect a layer intentionally while the Move tool is selected; and the Layers and Tools panels get new tooltips, some with short embedded videos. The iPad edition of the software included free with Photoshop subscriptions has also been updated, with Photoshop on the iPad 4.7 adding the Stroke and Drop shadow layer effects from the desktop edition. Users also get more options when double-tapping with the Apple Pencil, including Show color picker, Switch to eraser, Switch to last tool, Zoom to fit and Undo. Photoshop 24.6 is available for Windows 10+ and macOS 11.0+ on a rental-only basis. In the online documentation, the update is also referred to as the June 2023 release. Photography subscription plans, which include access to Photoshop and Lightroom, start at $119.88/year. Single-app Photoshop subscriptions cost $31.49/month or $239.88/year. https://helpx.adobe.com/photoshop/using/whats-new/2023-4.html Masterpiece X Masterpiece Studio has released Masterpiece X, an interesting free application that lets indie game developers create new assets by ‘remixing’ content from an online library inside virtual reality. The software, which is currently in early access, enables users with Meta Quest 2 headsets to generate 3D characters with custom geometry, textures, rigs and animations. 
Masterpiece Studio has been developing virtual reality content creation tools for some years now, releasing its current flagship product, VR sculpting, rigging and animation tool Masterpiece Studio Pro in 2021. Masterpiece X is aimed at a rather different audience, being intended to let hobbyists and indie game developers create custom 3D content without the need to develop it completely from scratch. Masterpiece X lets users create custom 3D assets by ‘remixing’ content from Masterpiece’s Community Library, an online library of stock content available under a Creative Commons CC0 licence. The video above shows the process for a humanoid character, but the promo embedded at the top of the story shows a wider range of content, including creatures, vehicles and environment assets. Users can change the shape and colour of the model by manipulating the mesh directly in virtual reality, and painting directly on its surface. For character rigging, the software comes with auto-rigging and auto-skinning systems, although it’s possible to work manually, drawing out bones and painting weight maps in virtual reality. Animation can also be done by keyframing the character manually, although given the target audience, we imagine that most users will simply apply readymade animation clips from the built-in library. Once complete, the 3D asset can be exported to the Community Library, from where it can be downloaded in standard 3D file formats. Unlike with the free edition of Masterpiece Studio Pro, it is possible to keep content uploaded to the library private, although you can choose to make it available to other users. Masterpiece Studio also plans to add generative AI capabilities to Masterpiece X: the promo material shows text-to-3D and text-to-animation systems. They are currently in closed alpha, but you can apply to join the waitlist here. Masterpiece X is compatible with Meta’s Quest 2 and Quest Pro headsets and Touch controllers. 
The app is currently available free in early access: Masterpiece Studio tells us that it plans to add paid tiers later, but that the base app will remain free. https://www.masterpiecex.com/blog/introducing-masterpiece-x https://www.oculus.com/experiences/quest/5502306219889537/ Corona 10 for 3ds Max and Cinema 4D Chaos Czech has released Corona 10, the latest version of its renderer for 3ds Max and Cinema 4D, extending Corona Decal and support for volumes in the Corona Camera, and improving highlight blurring. Corona 10 features updates to several of the software’s existing features, including decals, volume rendering, depth of field and procedural clouds. Texture projection system Corona Decal can now be used to control individual material channels, making it possible to use it to control displacement-based effects like cracks, or footprints in sand. The Corona Camera now properly supports volumes, including the Volume material, simulations created in Chaos’s Phoenix software, and VolumeGrids for OpenVDB objects. The change makes it possible to position the render camera half inside a volume – for example, at water level in a swimming pool – without having to position a Slicer around the camera to cut a hole in the volume. Other rendering improvements include a new DOF Highlight Solver, intended to generate better-defined highlights in parts of an image blurred by depth of field. It supports custom aperture shapes. The new procedural clouds system added in Corona 9 is now affected by the Direct Color property of the Corona Sun, making it easier to create time-of-day effects. In addition, caustics are now brighter and more detailed when rendering at 4K resolution and above. Workflow improvements include updates to the software’s listers for viewing and editing scene objects, with the unified Corona Lister now listing all of the lights, proxies, displacement materials and cameras in a scene. 
In the Cinema 4D edition, the Scatter Lister has been added to the main lister; in 3ds Max, it remains a separate interface element, but has been “totally reworked”. Changes unique to the 3ds Max edition include the option to apply the same Triplanar, Color Correction or Mapping Randomizer to multiple maps. The material editor also now caches rendered previews, making it “up to 22 times faster”. Changes unique to the Cinema 4D edition include the option to nest scatters when using the Chaos Scatter plugin: for example, to instance a set of cans, then instance condensation droplets over their surfaces. Outside the software itself, the Corona Benchmark has been updated to the Corona 10 rendering core. There is also a long list of smaller changes: you can find a full list via the link at the foot of this story. Support for online rendering system Chaos Cloud and the unification of the Corona Material Library with Chaos Cosmos, both initially scheduled for Corona 10, have now been moved back to Corona 11. Corona 10 is compatible with 3ds Max 2016+ and Cinema 4D R17+. The software is sold subscription-only online. Corona Solo subscriptions cost $53.90/month or $358.80/year. Corona Premium subscriptions cost $67.90/month or $478.80/year. https://blog.corona-renderer.com/chaos-corona-10-released/ Substance 3D Stager 2.1 Adobe has released Substance 3D Stager 2.1, the latest version of its scene layout and rendering tool. The update introduces a new AI-based GPU denoiser that cuts render times by “up to 99%”, improves turntable rendering, and extends support for USD data and Substance SBSAR materials. New features in Substance 3D Stager 2.1 include an experimental new AI-based GPU denoiser, which Adobe describes as “cutting render times by up to 99%”. That’s presumably the maximum possible time saving, but if the figure for a typical scene is anywhere close to that, it’s an impressive performance boost for a denoiser. 
The software’s turntable rendering system gets another update, with users now able to edit the scene while the animation is playing, and the option to export the animation in MP4 and GIF format. USD support has also been extended with users now able to import and export lights and Stager-specific elements like the ground plane and backplates. Other changes include better handling of materials in Substance SBSAR format, including support for physical size, and the option to import SBSAR texture atlases and display SBSAR presets. Substance 3D Stager 2.1 is available for Windows 10+ and macOS 11.0+. Perpetual licences cost $149.99. The software is also available as part of Adobe’s Substance 3D Collection subscriptions, which cost $49.99/month or $549.88/year for individuals; $1,198.88/year for studios. https://helpx.adobe.com/substance-3d-stager/release-notes/version-2-1-0.html Skybox AI 0.5 Blockade Labs has released Skybox AI 0.5, the latest version of its browser-based generative AI tool for creating skybox environments from text prompts or rough sketches. The update adds free user accounts on the Skybox AI website from which users can browse or remix their previous skyboxes and review the text prompts used to generate each one. Currently available free in early access, Skybox AI is an AI-based online tool that enables users to generate 360-degree panoramic images for use as backgrounds in games or DCC applications. To guide the result, users enter a 400-character text prompt to define the look and content of the skybox. Users can also choose from a set of 20 preset art styles, including both photorealistic and stylised looks suitable for games, animations, matte paintings or architectural visualisations. Once generated, the skybox can be downloaded as a 6,144 x 3,072px JPEG image in latlong format, with the option to generate a corresponding 2,048 x 1,024px depth map. 
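The "latlong" format mentioned in the Skybox AI item above is the standard equirectangular mapping from image pixels to directions on a sphere; a sketch of the generic convention (not Blockade Labs' internal code), using Skybox AI's 6,144 x 3,072px output size as the default:

```python
import math

# Standard latlong/equirectangular mapping for 360-degree panoramas.
# This is the generic convention used by DCC tools, not anything
# specific to Skybox AI's implementation.

def pixel_to_direction(x, y, width=6144, height=3072):
    """Map a pixel to a unit direction vector (x right, y up, z forward)."""
    lon = (x + 0.5) / width * 2.0 * math.pi - math.pi   # -pi .. pi across the image
    lat = math.pi / 2 - (y + 0.5) / height * math.pi    # +pi/2 at top, -pi/2 at bottom
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

# The centre pixel looks (almost exactly) straight ahead along +z,
# and the top row points straight up -- the "missing pole" region
# that Skybox AI and Polycam fill in generatively.
dx, dy, dz = pixel_to_direction(6144 // 2, 3072 // 2)
print(dx, dy, dz)
```

The half-width height (3,072 = 6,144 / 2) is what makes the mapping cover the full sphere with square pixels at the equator.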
Although Skybox AI has generated a lot of positive community feedback since its launch, excitement kicked up a notch last month with the rollout of Sketch mode. Available on desktop machines and larger tablets, it enables users to sketch out the layout of the skybox they want to generate and have Skybox AI turn the rough doodle into a detailed environment. Early user examples ranged from tracing over a photo of the Himalayas to generate a fantasy mountainscape to generating a sci-fi corridor inspired by the TRON movies. To that, Skybox AI 0.5, released earlier this week, adds the option to register for a free user account on the Skybox AI website itself. Accounts function as a history feature, enabling users to browse their previous skyboxes and review the prompts used to create them, and to re-download or remix the images. Skybox AI is currently available free in beta. It’s browser-based, so it should run in standard modern web browsers, although some features are only available on desktop machines or larger tablets. Blockade Labs hasn’t announced a final release date or pricing yet. SkyBoxGenerator Tools developer Adem Kilic has released SkyBoxGenerator, a handy new plugin for generating skyboxes directly inside Unreal Engine using generative AI-based online service Skybox AI. The plugin itself is free, though you will need a Skybox AI API key to use it. SkyBoxGenerator makes it possible to use Skybox AI directly inside Unreal Engine, entering a text prompt inside the Unreal Editor to generate a 360-degree environment image. Additional controls make it possible to choose between Skybox AI’s readymade visual styles, remix an existing image, and even access the new history feature in Skybox AI 0.5. The plugin supports Skybox AI’s depth map generation, making it possible to set up depth-based effects. SkyBoxGenerator is a free download. 
It is compatible with Unreal Engine 4.27, but Skybox AI developer Blockade Labs has tweeted that it is working on a version for Unreal Engine 5. To use it, you will need to enter your Skybox AI API key, which requires a paid subscription: prices start at $20/month. For artists looking to do something similar in Unity, Blockade Labs has its own Unity SDK.

https://github.com/ademkilic7/SkyBoxGenerator

Unity Muse

Unity has unveiled Unity Muse, a new generative AI service that will enable games artists to create assets based on a combination of text prompts and rough sketches. The platform was announced alongside Unity Sentis, which will enable game developers to embed AI models into the Unity runtime itself. Both are currently in invite-only beta.

The new services are the first public results of the Unity AI initiative, announced at GDC 2023 via a memorably vague teaser campaign. Of the two, generative AI platform Unity Muse is the one aimed at games artists. Unity’s blog post describes it as enabling users “to create almost anything in the Unity Editor using natural input such as text prompts and sketches”.

The initial release is rather more limited, since it consists of only the AI search component of the platform. With Muse Chat, users can type in natural-language queries to have Muse Chat search Unity documentation and training to get “well-structured, accurate, and up-to-date information … including working code samples”. Similar functionality is currently available in third-party add-ons like AICommand, the proof-of-concept integration of ChatGPT into Unity that Unity engineer Keijiro Takahashi released earlier this year.

However, Unity aims to roll out generative AI capabilities “over the next few months”. Those shown in the demo video include motion synthesis and texture synthesis based on text prompts, and 2D sprite generation based on text inputs and inpainting.
Unity Sentis is aimed at developers, and makes it possible to embed an AI model into the Unity runtime itself, “enhancing gameplay and other functionality directly on end-user platforms”. The teaser video, embedded above, suggests that it could be used to generate AI-driven characters.

Unity Muse and Unity Sentis are currently in invite-only beta. You can apply to join the beta program here. Unity hasn’t announced a release date or system requirements for either service, or whether they will be available free to Unity users or via paid subscriptions.

https://blog.unity.com/engine-platform/introducing-unity-muse-and-unity-sentis-ai

Machina Fractals: Mecha

Machina Infinitum has now released its second Unreal Engine fractal plugin, Machina Fractals: Mecha. As well as providing six different fractal formulae, the new plugin includes a couple of more advanced features, including the option to generate PBR texture maps using the metallic-roughness workflow.

Unlike in Machina Fractals: Essence, it is also possible to loop parameters, enabling users to generate fractals that evolve continuously in real time, for use in live projections and VJ sets. The plugin ships with three VJ example levels and two Blueprint templates for VJ projects. As with Machina Fractals: Essence, it can also be used to generate collision meshes for games, and is designed to integrate with both Houdini and the free Houdini Engine plugin for Unreal Engine.

Machina Fractals: Essence is compatible with Unreal Engine 5.0+, Machina Fractals: Mecha with Unreal Engine 5.1+. Machina Infinitum recommends at least an Nvidia GeForce RTX 2080 Ti GPU and/or 8GB VRAM. Machina Fractals: Essence now costs $99.99. Machina Fractals: Mecha costs $119.99.

https://www.machina-infinitum.com/unrealengine

RealityScan

Epic Games has made RealityScan, its free photogrammetry app, available on Android as well as iOS.
The app turns photos of real-world objects captured using the camera in a mobile phone or tablet into textured 3D models for use in AR, game development or general 3D work. Released last year, RealityScan is intended to make the 3D scanning capabilities of tools like RealityCapture, the photogrammetry software that Epic acquired in 2021, accessible to hobbyists as well as DCC pros.

Data is processed in the cloud, from where the resulting 3D model can be exported to Sketchfab, Epic’s model-sharing platform, and downloaded in FBX format for use in other DCC applications and game engines. Files uploaded to Sketchfab are automatically transcoded into glTF and USDZ formats, supported in applications including 3ds Max, Blender, Cinema 4D, Maya, Unity and Unreal Engine.

The new Android edition makes RealityScan available for phones and tablets that support ARCore, Google’s augmented reality framework: most devices from the past five years should work. The Android version has all of the new features from RealityScan 1.1, the latest iOS release.

RealityScan is compatible with Android phones and tablets that support ARCore, running Android 7.0+, and with iPhones and iPads running iOS 16.0+ and iPadOS 16.0+. It’s a free download. Basic Sketchfab accounts are free.

https://play.google.com/store/apps/details?id=com.epicgames.realityscan
https://apps.apple.com/us/app/realityscan/id1584832280

Vantage 2.0

First released in 2020, Vantage is a hardware-accelerated ray tracing renderer intended for exploring large production scenes in near-real time. Chaos’s pitch for using Vantage over other real-time rendering solutions – particularly Unreal Engine, which is free for this kind of work – is ease of use. Rather than having to convert V-Ray scenes created for offline renders for use in a game engine – still a time-consuming process, despite tools like Datasmith – Vantage can render the original .vrscene files.
It doesn’t support every feature of V-Ray – you can find a list of supported features for each V-Ray host application in the online documentation – but it’s intended to be a close visual match. As well as navigating the 3D environment, using game-like controls with automatic collision detection, users can perform edits to the scene inside Vantage.

Although Vantage 2.0 adds new features across the board, some of the most significant are its new animation capabilities. Whereas the software was originally geared mainly towards rendering camera animations, it can now render all of the animated materials, textures and lights in .vrscene files. Equally importantly, it can render deforming meshes, like animated characters. Chaos pitches the change as making Vantage 2.0 “as powerful for VFX as it is for architectural visualization”, with suggested use cases including previs, look dev and animation playblasts.

Other major new features include a Scene States system for creating and rendering variations of a scene, including variant object placement, materials and lighting, for look exploration or client reviews. The software also now supports nested scenes – that is, .vrscenes referenced within other .vrscenes – enabling “more scene assembly scenarios between 3D creation tools”.

Other new features include Chaos Scatter, the object instancing and scattering tool recently introduced into V-Ray itself, and support for volumetric fog. Vantage also now supports V-Ray’s override material, .vrmat materials, and materials with multiple UV channels: for example, those with stacked decals.

In addition, it is now possible to create lights directly inside Vantage – Point, Spot, Directional, Rectangle, Disc, Sphere and IES lights are supported – and to render Mesh lights in .vrscenes. The update also adds support for orthographic cameras, and “better and more structured” camera grouping.
Rendering changes include the option to render AOVs, with available Render Elements including lighting components, Z-depth, velocity, and object and material masks. For render denoising, Vantage now integrates Intel’s Open Image Denoise (OIDN), while the existing Nvidia denoiser now supports render upscaling. Workflow improvements include a quality slider for balancing render quality against interactive performance, and the option to render specific frames or frame intervals.

Another key change is that Vantage now supports AMD GPUs: although the software has always used DXR rather than Nvidia’s OptiX API, the initial release only supported Nvidia GPUs. Vantage can now run on “DXR-compatible” AMD graphics cards: that is, those with dedicated ray tracing cores, like the Radeon RX 6000 and 7000 and Radeon Pro W6000 and W7000 Series. The software also now supports HDR monitors.

Vantage also now has definitive pricing, Chaos having made one-year licences of the software available for free throughout most of the Vantage 1.x release cycle. Subscriptions were originally expected to cost $389/year when rolled out, but Chaos told us last year that it was planning to review that figure, and indeed, the price of Vantage 2.0 is significantly higher, with US resellers offering subscriptions for $658.80/year.

Chaos Vantage is available for Windows 10 only. It requires a DXR-compatible AMD or Nvidia GPU. The software is rental-only, with subscriptions costing £86.90/month ($108.90/month) or £520.80/year ($658.80/year). It is included free with V-Ray Premium and Enterprise subscriptions.

https://www.chaos.com/vantage/whats-new
https://docs.chaos.com/display/LAV/Chaos+Vantage%2C+v2.0.0
  15. It all depends on what level of quality you're after. If it's a game asset, the camera mapping is just fine. If it's a product placement, you'll need more effort. For the binding lines, you could edit the texture in Photoshop and import it with a mask on top of the procedural shader. This way you keep the binding. Try enlarging the texture with some AI help for better quality.
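The mask-over-procedural layering described above boils down to a per-texel blend. A minimal, hypothetical sketch in plain Python, assuming all three layers are same-sized grids of grayscale floats in 0..1 (a render engine's layer shader does the same math per texel):

```python
def masked_composite(procedural, baked, mask):
    """Blend a baked (edited) texture over a procedural base using a mask.

    All three inputs are same-sized rows of grayscale floats in 0..1.
    mask = 1.0 keeps the baked detail (e.g. the binding lines),
    mask = 0.0 shows the procedural shader underneath.
    """
    return [
        [p * (1.0 - m) + b * m for p, b, m in zip(prow, brow, mrow)]
        for prow, brow, mrow in zip(procedural, baked, mask)
    ]

# Tiny 1x3 example: only the middle texel is masked to the baked value.
base  = [[0.2, 0.2, 0.2]]
baked = [[0.8, 0.8, 0.8]]
mask  = [[0.0, 1.0, 0.0]]
print(masked_composite(base, baked, mask))  # [[0.2, 0.8, 0.2]]
```

In C4D you would get the same result with a Layer shader rather than code: procedural on the bottom, the Photoshop-edited texture on top, and the mask as the top layer's alpha.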
  16. The steps do not rotate. They are normal. There are only two light sources, both out of camera view.
  17. I'd use Hair. (None of the following is tested; I'm just describing how I would approach a solution.) Model the lamp with a Lathe (because it's not too spherical) and make cuts to define the regions of the strands. Disconnect those regions to make them separate objects. Retopologize these regions in order to make "flows" (Cerbera can help). Select all downward flow edges and convert them to splines. You don't need to make too many of them; about 6-7 are enough. Then convert those to guides using Hair, and use Hair to populate between the guides. Just to make clear why I'd choose this method: from what I perceive from this image, the organic element is made out of straws, which is too much to model by hand but not too much to do with hair. You don't need to render hair, you just need it to generate enough splines for sweeping. I've counted 24 long ones and just 15 short (vertical) ones. After the hair has been generated, I'd make sure the hairs are generated as splines, then I'd Sweep them. To make the strands look more uneven in their width along the flow, I'd throw in a Displacer Deformer with a simple noise, also elongated. Then add a texture for finer detail. Use a different Hair generator for the vertical binding strands. For the vertical ones, I think you can use a Hair Material (kink/frizz/wave, whatever) to deform the actual guide, but I forgot how that can be done... Alternatively you can use an Effector to deform those binders to look zig-zag...
  18. Try using the Force Field. It supports splines.
  19. Which strategy you should follow depends on the final intent. To add to what other people have suggested, there is also a Grid Array From Object Mode, which is the most common way to voxelize objects in C4D. But judging from the images you posted (assuming this is not your faked example but your actual reference to copy), I think the best solution is also the easiest. Just create all your elements and render your animation as usual, if you don't already have a video/image. Then just create a simple Grid Array with your planes and make sure you set the exact number of X and Y Instance Counts as the number of X and Y pixels of the video/image. Then create a Material, import the video/image as a texture into the Luminance channel, and assign the Material to the Cloner itself in Plane Projection Mode. This is how the original pictures were made, because as you can see each plane has more than one pixel on it. If you need to assign each plane to only one pixel, Jed's solution is the one.
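The pixel-to-plane mapping above can be sketched in a few lines. This is a minimal, hypothetical average-pooling of an image into a grid of per-plane luminance values, written in plain Python with the image given as rows of RGB tuples (function names are mine, not C4D's); when the grid counts equal the pixel counts, it reduces to one pixel per plane:

```python
def luminance(rgb):
    """Rec. 709 luminance of an (r, g, b) tuple with 0-255 channels, in 0..1."""
    r, g, b = rgb
    return (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255.0


def plane_grid_luminance(pixels, grid_x, grid_y):
    """Average-pool an image (rows of RGB tuples) into grid_y x grid_x cells.

    Each cell is what one plane in the Grid Array would "see": with fewer
    planes than pixels, each plane averages a block of pixels, like the
    reference images; with equal counts, each plane gets exactly one pixel.
    """
    h, w = len(pixels), len(pixels[0])
    cell_w, cell_h = w // grid_x, h // grid_y
    grid = []
    for gy in range(grid_y):
        row = []
        for gx in range(grid_x):
            block = [
                luminance(pixels[y][x])
                for y in range(gy * cell_h, (gy + 1) * cell_h)
                for x in range(gx * cell_w, (gx + 1) * cell_w)
            ]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid


# 4x2 synthetic image: left half black, right half white.
img = [[(0, 0, 0)] * 2 + [(255, 255, 255)] * 2 for _ in range(2)]
print(plane_grid_luminance(img, grid_x=2, grid_y=1))
```

Assigning the video to the Cloner's Luminance channel in Plane Projection Mode lets the renderer do this sampling for you; the sketch just shows what each plane ends up displaying.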
  20. Big minds find each other. 5 days ago I wrote that same suggestion for the upcoming C4D.
  21. Just saw this. It looks like the next Java version won't look much like Java, but more like Python. There is no doubt that there must be an evolutionary convergence in how programming languages are designed and operate, but when that happens the comparison standards get blurry, and when two or more things get too similar, people get confused and polarized.