How to stop the Unity Editor from continuously rendering?

Look closely at the image below. Even though the game is not running, the scene view shows the green wheel spinning. This means the scene is being rendered constantly and my battery is draining.

(screenshot of the editor)

I know Unity didn't use to do this, and in fact that was one of the reasons I liked Unity over another engine that defaulted to always rendering and would make my laptop fans spin up even when just sitting idle in the editor.

Is there any way to get Unity to render on demand? Meaning, only render when I interact with it, when I move the camera or change a setting, like it used to?

I saw in the preferences there's an "Interaction Mode" setting, but the best it can be set to is to not render faster than 30 fps. That's not useful.

Is there some other setting to get the editor to only render when interacted with?

unity – Efficiently rendering lots of the same mesh in Unity3D, but with different colors?

I’m working on a tile based game, where grass is spreading from tile to tile, so soon lots of grass appear on the board.

Instancing is on, so the FPS is kind of good, even with 300k triangles (1 grass leaf consists of 90 triangles).

BUT: tiles can be wet or dry on a floating-point scale, which makes the grass leaves turn from green to yellow.

Currently I implemented this simply as such:

// ground.Status is on a 0-100 scale; lower values push the red channel up (more yellow).
// The 100f guards against integer division in case Status is an integer.
var r = originalColor.r + 0.27f * (1f - (ground.Status / 100f));
// Accessing renderer.material silently instantiates a per-renderer copy of the material.
renderer.material.color = new Color(r, originalColor.g, originalColor.b);

Which creates a new material instance for every grass leaf…

Obviously I can’t change the sharedMaterial’s color, because that would change all other grass’ colors as well.

So how could I efficiently render grass on a big spectrum from yellow to green?

One thing I came up with is to create only about 100 materials, in 1-percent steps:

  • the first material is 100% yellow, and the last one is 100% green.

And when a grass leaf turns a bit more green, it doesn't need a whole new material with, say, 3.47% green; it can instead use the already-created 3% green material.
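A minimal sketch of that pooling idea, assuming a standard Unity C# setup; the pool size, the yellow-to-green ramp and the GrassMaterialPool name are illustrative, not taken from the original code:

using UnityEngine;

// Hypothetical pool of pre-built materials, one per whole-percent step, so every
// grass leaf reuses an existing material instead of instantiating its own.
public static class GrassMaterialPool
{
    const int Steps = 101; // 0..100 percent
    static Material[] pool;

    public static Material Get(Material template, float greenPercent)
    {
        if (pool == null)
        {
            pool = new Material[Steps];
            for (int i = 0; i < Steps; i++)
            {
                pool[i] = new Material(template); // created once, shared by many leaves
                // Illustrative ramp; the real project would reuse its own red-channel formula.
                pool[i].color = Color.Lerp(Color.yellow, Color.green, i / 100f);
            }
        }
        int index = Mathf.Clamp(Mathf.RoundToInt(greenPercent), 0, 100);
        return pool[index];
    }
}

// Usage: a leaf at 3.47% green snaps to the shared 3% material, so instancing keeps batching.
// renderer.sharedMaterial = GrassMaterialPool.Get(grassTemplate, 3.47f);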

Any better ideas?

Thanks!

seo – Is server side rendering without any styling acceptable for a single page application (that is heavily styled) in order to satisfy most search bots?

Let’s say that you have a big / heavy single page application (SPA) that can be represented statically in 100,000+ pages.

Would it be acceptable to have a back end that dynamically generates pages that are very similar in structure and content to the pages generated by the SPA, but with no styling – i.e. generating a craigslist-like version of the SPA just for the search engines?

Would it be considered cloaking, or would it be okay? (Given that the routing will be the same and the structure and content of the pages will be 95%+ similar to those of the SPA.)

The reason I'm considering this approach is that it would save both a lot of development time (in this particular scenario, because styling inconsistencies would otherwise have to be handled) and an enormous amount of computational resources (which would otherwise be needed to cache everything using headless Chromium).

Error when rendering with select2 in Django

Good evening. I am trying to build a semi-automatic input with select2, but when the screen renders I get this error.

my HTML code:

(screenshot)

my select2 code:

(screenshot)

the rendering error:

(screenshot)

Where is the error?

Default rendering and CSR rendering of the same SharePoint list

I was looking on the Internet for some good examples, but most of them relate to calling the default rendering method for forms.

My case is that I have a page with the same list in two web parts with different views; for one I want to apply CSR, and for the other I just want the default rendering. Some people have mentioned that you can check the view ID and call RenderFieldValueDefault(ctx), but it doesn't seem to work – the output is empty.

Here's how I tried it:

 SP.SOD.executeFunc('clienttemplates.js', 'SPClientTemplates', function () {
  SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
    Templates: {
      Fields: {
        AnyFieldInternalName: {
          View: function (ctx) {
            if (ctx.listName !== 'some_unique_id') {
              return RenderItemTemplate(ctx);
            }
            return 'haha';
          },
        },
      },
    },
    OnPreRender: __customPreRender,
    OnPostRender: __customPostRender,
  });
});
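For reference, here is a sketch of how that view-ID check could be wired up. It keys off ctx.view (which should hold the ID of the view being rendered) instead of ctx.listName, since both web parts show the same list; '{view-guid-of-the-csr-view}' is a placeholder, and RenderFieldValueDefault is the function mentioned above, which (as noted) may still return nothing for some field types:

 SP.SOD.executeFunc('clienttemplates.js', 'SPClientTemplates', function () {
  SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
    Templates: {
      Fields: {
        AnyFieldInternalName: {
          View: function (ctx) {
            // ctx.view holds the ID of the view being rendered, which is what
            // distinguishes the two web parts showing the same list.
            if (ctx.view !== '{view-guid-of-the-csr-view}') {
              // Other web part: fall back to the out-of-the-box field rendering.
              return RenderFieldValueDefault(ctx);
            }
            return 'haha'; // custom rendering for the targeted view only
          },
        },
      },
    },
    OnPreRender: __customPreRender,
    OnPostRender: __customPostRender,
  });
});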

Does anyone know how to do this without having to handle the "default" fields in a custom way?

thank you,

How to start rendering before the end of the JSON response

I am developing a news app using React Native, and the problem is that the startup time is too long (about 3 seconds to load initially and 5 seconds until the home screen appears).
I have 36 articles, and the JSON response is large (about 3,000 lines). After analysis, I found that the cause is the time spent waiting for the promise of the function that gets the articles to resolve (after .then).

The function that obtains articles:

export async function getArticles(){

    try {
        let articles = await fetch(``);
        let result = await articles.json();
        return result;

    } catch (error) {
        throw error
    }
}

The home.js which displays the articles is:

export class HomeScreen extends PureComponent {
    constructor(props) {
        super(props);
        this.state = {
            isLoading: true,
            data: null,
            isError: false,
        }
    }
    componentDidMount() {
        getArticles().then(data => {
            this.setState({
                isLoading: false,
                data: data
            })
        }, error => {
            Alert.alert("Error", "Something happend, please try again")
        })
    }

I tried to simplify the body of the response by extracting only specific fields, but it made no difference, and I don't know how to retrieve only a limited number of articles from the article URL itself, via per_page or something similar.
Can you advise me on how to start rendering before the entire JSON response has arrived?
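One direction (a sketch only, and it assumes the backend accepts pagination query parameters such as per_page and page, which not every API does) is to fetch a small first page, render it immediately, and load the remaining articles afterwards. The URL is left blank as in the original, and getArticlesPage is a hypothetical helper:

// Hypothetical paginated fetch; the per_page/page parameters are an assumption
// about the backend, and the endpoint is left blank as in the original.
export async function getArticlesPage(page = 1, perPage = 10) {
    const url = ``; // article endpoint
    const response = await fetch(`${url}?per_page=${perPage}&page=${page}`);
    if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
    }
    return response.json();
}

// In HomeScreen (assuming the response is an array of articles): show the first
// small page as soon as it arrives, then append the rest without blocking it.
componentDidMount() {
    getArticlesPage(1, 10)
        .then(firstPage => this.setState({ isLoading: false, data: firstPage }))
        .then(() => getArticlesPage(2, 26))
        .then(rest => this.setState(prev => ({ data: [...prev.data, ...rest] })))
        .catch(() => Alert.alert("Error", "Something happened, please try again"));
}

If the backend cannot paginate, another option is to keep the single request but render the list with a virtualized component such as FlatList, so that only the visible articles are laid out.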

catalina – Can I get Chrome to render WebGL with the AMD GPU rather than the integrated Intel graphics?

When I go to https://kepler.gl/demo/ukcommute, it is extremely slow on my MacBook Pro, which has a built-in Radeon Pro.

WebGL appears to be rendered using the Intel graphics card. Is there a way to force it to use the AMD one? I have hardware acceleration enabled in Chrome, and other applications use the AMD card just fine.


architecture – Forward rendering and separation of shader logic

I am currently playing with writing a rendering engine and implementing a forward rendering pipeline.

I have some doubts about how things should be implemented with regard to render passes as well as rendering multiple lights.

So I wonder what is the best approach here:

  1. A dynamically generated uber shader that is adjusted to the number of lights currently affecting the active scene.
    This has the advantage of a single render pass for the lighting shader, so fewer bindings occur.
  2. Separate lighting shaders for each type of light, with a render pass per mesh per light type, combining the results at the end (possibly ping-ponging between two render targets) with additive blending. This has the benefit of separation and of more modular, maintainable shader code, but on the other hand there are more render passes and more binding going on, not to mention the additive ping-pong blending between the render targets. (A sketch of this approach follows the list.)
  3. Other ideas that I haven't heard of or thought of and would like to hear about.
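For concreteness, here is a compilable sketch of option 2 under one common variant: instead of ping-ponging between two render targets, each light's contribution is blended additively into the same framebuffer on top of a base pass. Every type and function below is a stub standing in for the engine's own abstractions, not a real graphics API:

    // Sketch of one additive pass per light. All types and functions are placeholders.
    #include <vector>

    struct Mesh {};
    struct Light { enum Type { Directional, Point, Spot } type; };
    struct Shader {};

    Shader ambientShader, directionalShader, pointShader, spotShader;

    void bindShader(const Shader&) {}
    void bindLightUniforms(const Shader&, const Light&) {}
    void drawMesh(const Mesh&, const Shader&) {}
    void setAdditiveBlending(bool) {}   // e.g. blend ONE, ONE when enabled
    void setDepthTestEqual(bool) {}     // e.g. depth func EQUAL so lit passes reuse the base-pass depth

    void renderForward(const std::vector<Mesh>& meshes, const std::vector<Light>& lights) {
        // Base pass: ambient term plus depth, with normal depth writes and no blending.
        setAdditiveBlending(false);
        bindShader(ambientShader);
        for (const Mesh& m : meshes) drawMesh(m, ambientShader);

        // One pass per light: each light's contribution is summed into the framebuffer,
        // so the per-light shaders stay small and separate.
        setAdditiveBlending(true);
        setDepthTestEqual(true);
        for (const Light& l : lights) {
            const Shader& s = (l.type == Light::Directional) ? directionalShader
                            : (l.type == Light::Point)       ? pointShader
                                                             : spotShader;
            bindShader(s);
            bindLightUniforms(s, l);
            for (const Mesh& m : meshes) drawMesh(m, s);
        }
        setDepthTestEqual(false);
        setAdditiveBlending(false);
    }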

Of course, there is always room to optimize both techniques, e.g. occlusion testing, or testing which point/spot light affects which mesh (relevant for the second technique).

So I wonder what the approach of modern rendering/game engines is; maybe I haven't listed it above, and I would love to hear about it.

Also, if you favor one technique or the other, I would love to hear your opinion on the pros and cons.

Thanks in advance!

bokeh – Why does an electronic first-curtain shutter affect the rendering of blurred backgrounds?

The reason is that at large apertures in good light, which require a fast shutter speed, most of the exposure happens through a very narrow travelling slit between the first and second curtains.

The curtains forming the front and back of the slit, however, are not at the same distance from the sensor.

The electronic first curtain "moves" in the plane of the photosites themselves.

The mechanical curtain moves at a noticeable distance from the sensor: there is an anti-aliasing filter and perhaps a protective glass on top of the sensor, and only then the shutter curtain.

Because the two curtains are at different distances from the sensor, a parallax effect becomes significant. The faster the shutter speed, the narrower the travelling slit and the more pronounced the problem.
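To put rough, illustrative numbers on this (they are assumptions, not measurements): a focal-plane shutter that crosses a 24 mm tall sensor in about 1/250 s travels at roughly 0.024 m ÷ (1/250 s) = 6 m/s, so at an exposure of 1/4000 s the slit is only about 6 m/s × (1/4000 s) ≈ 1.5 mm wide.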

To mitigate the problem, the shutter speed should be slowed down. First, if the camera is not already at its lowest ISO setting, lower the ISO as far as it will go. Second, if the shutter speed is still too fast and the effect appears, an ND filter can be used.

However, the best solution is to use shutter curtains that are at the same distance from the photosites: if the second curtain is mechanical, the first one should be mechanical as well. Unfortunately, this is not possible on every camera.

2d – Rendering entities from a finite world so that it appears infinite [Java/LibGDX]

I'm programming a 2D side-scrolling game. I currently have a finite world made up of blocks stored in an array, plus a chunk array containing a list of entities for each chunk. Until now I simply had bounds attached to the camera that prevent it from moving outside the limits of the world, i.e. normal finite-world rendering. But I would like the world to appear infinite, and for the blocks this already works rather well: I divide the camera bounds by the size of a block and get the array indices that I need for rendering.

For simplicity, here is just an example with the block centered at the camera position (and only the x coordinate, since y is handled the same way):

        blockX = camera.position.x / BLOCK_SIZE;
        blockX = (world.width + blockX) % world.width;

The second step ensures that the value always stays within the limits of the world.
So if the camera goes into the negative, I will teleport it to the other side of the world at some point. But until then, the following happens:

Consider a world width of 100 (world width = number of blocks; the world array goes from 0 to 99, inclusive) and a block size of 10:

        blockX = -10 / 10;           // => blockX = -1
        blockX = (100 + (-1)) % 100; // => blockX = 99
        int blockID = world.blocks(blockX);
        renderBlock(blockID, blockX * BLOCK_SIZE);

I fetch the block data for block -1, which is block 99 (because the world should appear infinite), but I render it at -10 because the camera is not currently positioned at the other end of the world.

For the pieces, I have a similar system, I make the pieces essentially the same as the blocks. So, for block x = -1, I make block 9 (block size = 10 blocks). But if, for example, I return the entities in block 9 while the camera is still at x = -5, all the entities are returned to their normal world position at around 90 to 100. But they should be returned at -10 to 0. My question now is, how would you properly relate the current position of the songs to the position of the rendered entity?