Will antifeminism soon become hate speech punishable by imprisonment?

Theoretically, eventually. But probably not.

It's not an unreasonable assumption, judging by the current social climate. Look at social media: anything that defies the feminist crowd, or doesn't directly flatter it, is treated as a blockable / bannable offense, and that goes for people on both the left and the right. It's ridiculous. Even Reddit is generally authoritarian in this respect.

[Edit] In addition, universities have become coddling safe spaces that perpetuate this cultural climate, where being anti-feminist is treated as equivalent to being the Führer. [Edit]

The saving grace is that I don't think men will give up much more ground. People in general, and men especially, are really fed up with petulant feminist-leftist dogma. It has gotten out of control, and I feel there is a real storm quietly brewing in response. Especially after Gillette's little fiasco.


[REQ] Ivona Reader Voice and Speech Synthesis Program 2 – Download and Installation Instructions

Hello,

Could anyone share this software, please?

Should I hit a liberal in the mouth when they start to spew hate speech against America?

Whoa! Not so long ago, it was mostly conservative trolls who went on about their right to hate speech! So don't even bring political parties into this.

One thing these trolls consistently pointed out, which was actually a fair point, is that hate speech is subjective. In other words, whether something is hate speech depends on the opinion of the person hearing it. So before you hit someone in the mouth, make sure the judge agrees it was hate speech. That way your sentence for assault might be a little lighter.


linux – How to play audio (or speech) on AirPlay nodes from the command line?

I'm trying to find a way to send audio to an AirPlay node from the command line, on any platform (OSX, Linux or Windows). A thread here suggests that

say -a ?

on OSX should include AirPlay nodes in its device list, but on my OSX system I only see the local hardware output devices. (Note that from the GUI, all AirPlay nodes appear in the volume controller, and they are all available in iTunes on all platforms as well.)

I've tried various options such as AirControl and Airplayer, but they all seem to be limited to Apple TV screen mirroring rather than audio output.

Can anyone suggest a solution for sending arbitrary audio, or at least the output of OSX's "say", to an arbitrary AirPlay node, from any platform?
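
As a point of reference, here is a minimal sketch (macOS only) of driving the "say -a" approach mentioned above from Python via subprocess. The device name used below is a hypothetical example, and AirPlay nodes may well not appear in the list at all, which is exactly the problem described.

# Minimal sketch (macOS only): ask the say command which audio output devices
# it knows about, then speak through one of them. "Living Room" is a
# hypothetical device name; AirPlay nodes may not show up in this list.
import subprocess

# "say -a ?" prints the audio output devices that say can use.
devices = subprocess.run(["say", "-a", "?"], capture_output=True, text=True)
print(devices.stdout)

# Speak through a specific output device, selected by name prefix or ID.
subprocess.run(["say", "-a", "Living Room", "Hello from the command line"])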

Trump's prime-time speech on the government shutdown

President Donald Trump will hold court in prime time tonight as he delivers an Oval Office speech in the hope of convincing the American public and skeptical lawmakers that there is a crisis on the southern border.

His goal: to secure billions of dollars in funding to build the "big, beautiful" wall he promised, amid a partial government shutdown with no end in sight.


How do you spell out the word "period" using Google Speech Recognition?

When using voice typing to transcribe notes in Google Keep, I noticed that every time I say "period", the punctuation mark is inserted. That's great 99% of the time, but sometimes I need the actual word, for example "3rd period", or even "comma".

A relatively simple workaround is to use the plural forms ("periods" / "commas") and edit them afterwards, but I wonder if there is a setting I've missed that could be changed. Settings :: Language and keyboard :: (Google voice input or Text-to-speech output) are the two places I've looked.

Free Speech: Looking for free and anonymous hosting for a website

As the title says, I am looking for anonymous and free web hosting for a website with political free-speech content.

Unlike in almost every other country on the planet, the content I would like to publish cannot be published freely in my country. No, it's not warez, child pornography or spam. In the United States, for example, this content would be fully covered by the First Amendment to the US Constitution. Since I live in my restrictive country, they would come after me if they knew I was the one creating the content, whatever service on this planet I used.

Since payment is almost always traceable, the hosting needs to be free.

Of course, I am perfectly happy for the web host to place its own advertising on the page so that it can monetize my content for itself.

What I am asking for at this point is a service that hosts content reachable via a direct link, i.e. a website address on the clearnet (meaning not only via the Tor Browser or other dark-web software), and for which I do not have to pay in any currency. I must be able to create or upload my content to the website via Tor, ideally through a web client.

To answer the obvious questions in advance:

Yes, I have been looking for such a service for hours, days, weeks and months. Many claim to offer it, but none really do. They simply lie in their advertising to get you to start the sign-up process, and then they want either money in some form or a real identity.

I do not want to use Bitcoin or any other cryptocurrency, because then I have to trust yet another group of people, such as those running the Bitcoin service or the developers of the cryptocurrency itself.

I would like to create my website only via Tor, so that Tor is the only service I have to trust, nothing else. Of course, this is a calculated risk in itself, and yes, I know that Tor is funded by the NSA (as I read somewhere) and that the police have managed to trace criminals operating over it anyway, but I am willing to accept this risk and no extra ones.

As my country is relatively small, I am pretty positive the web hosting provider would suffer no consequences if it ignored demands from the country where I live to delete the content. But of course, it can always delete my free-speech content itself if it sees fit.

Does anyone know a free and anonymous hosting service?

Google Speech-to-Text receives the data in the wrong order

I'm trying to use the Google Speech-to-Text API (through the @google-cloud/speech npm package) to transcribe audio from a browser microphone (it has to work in multiple browsers, so I can't just use Chrome's built-in speech API).

I create a streamingRecognize stream with the necessary parameters and then push some data to it, but the API responds with: "Malordered data received. Send exactly one config, followed by audio data".

Here is the code:

// imports (assumed, not shown in the original): the Speech client and a socket.io server
const gcpSpeech = require('@google-cloud/speech');
const io = require('socket.io')(3000); // hypothetical port

// create a client
const gcpClient = new gcpSpeech.SpeechClient();

const encoding = 'LINEAR16';
const sampleRateHertz = 16000;
const languageCode = 'en-US';

const request = {
  config: {
    encoding: encoding,
    sampleRateHertz: sampleRateHertz,
    languageCode: languageCode,
  },
  interimResults: true
};

const recognizeStream = gcpClient.streamingRecognize(request);

recognizeStream
  .on('data', (data) => {
    console.log('onDataThing', data);
    process.stdout.write(
      data.results[0] && data.results[0].alternatives[0]
        ? `Transcription: ${data.results[0].alternatives[0].transcript}\n`
        : `\n\nTranscription time limit reached, press Ctrl+C\n`
    );
  })
  .on('error', (error) => {
    console.log('error', error);
  });

// forward audio chunks arriving over the socket into the recognize stream
io.on('connection', (socket) => {
  socket.on('audiodata', (data) => {
    console.log('audiodata', data);
    recognizeStream.write(data);
  });
});

As you can see, I capture the audio in the browser and send it through a socket (using socket.io) to the server for streaming recognition.

The configuration is sent when the stream is created with streamingRecognize, and the audio is then sent chunk by chunk, so I don't understand why it is considered out of order.
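
For comparison, here is a minimal sketch of the same ordering rule (exactly one config message, then the audio) using the official Python client, google-cloud-speech; the audio chunk list is a placeholder assumption rather than part of the question.

from google.cloud import speech

client = speech.SpeechClient()

# Recognition settings mirroring the question: LINEAR16 PCM, 16 kHz, en-US.
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
streaming_config = speech.StreamingRecognitionConfig(config=config, interim_results=True)

# Placeholder: an iterable of raw LINEAR16 byte strings, e.g. chunks read from a socket.
audio_chunks = []

def audio_requests():
    # Only audio goes into these requests; the streaming_recognize helper
    # prepends the single config message, so the order is config first, then audio.
    for chunk in audio_chunks:
        yield speech.StreamingRecognizeRequest(audio_content=chunk)

responses = client.streaming_recognize(config=streaming_config, requests=audio_requests())
for response in responses:
    for result in response.results:
        print("Transcription:", result.alternatives[0].transcript)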

If you have an idea or a solution, it would be great!

python – Classification for Speech Recognition

I've extracted features from a dataset of voice samples. Now I want to classify them, using a Deep Belief Network or any other classifier, for speaker identification. I need Python code to do this. Thanks in advance.

from sklearn import preprocessing
from dbn.tensorflow.models import UnsupervisedDBN
from sklearn import tree
from sklearn.decomposition import PCA
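
Since the question allows "any other classifier", here is a minimal sketch that uses a plain scikit-learn SVM in place of a Deep Belief Network; the feature matrix and speaker labels below are random placeholders standing in for the features extracted from the voice samples.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: X is the extracted feature matrix (n_samples x n_features)
# and y holds the speaker labels. Replace these with the real extracted features.
X = np.random.rand(200, 40)
y = np.random.randint(0, 5, size=200)

# Hold out part of the data to estimate speaker-identification accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale the features, then fit an RBF-kernel SVM as the classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))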

accessibility – Distinguish the aria-live speech from names accessible through the user interface

We have just launched our first user test of an accessible user interface prototype with blind and visually impaired testers. Despite my efforts to follow the various WCAG recommendations and specifications, there were major problems.

By far the most important of these was the confusion that arose when the screen reader announced changes to the aria-live regions, and the way those spoken reports ran contiguously into the reading of the accessible names of the user interface in response to keyboard navigation.

I want to point out that our product is a simulator for first aid training. (Spoiler: Act fast or the patient dies).

It's a training product for a very large audience, but much more like a game than a web page, although we use the browser as an engine.

A live region may say "A paramedic has entered the room" or "The patient has opened his eyes". This occurs as an indirect (and delayed) response to the actions of the user.

Our testers tried to find a relationship between these kinds of reports and their own keystrokes. I'm sure the real-life situation is confusing and hurried too, but there you can at least tell the difference between your own hands and someone else's.

The main problem (in my opinion) is that these two semantically distinct sets of content were read with exactly the same synthesized voice, and with no gaps. The result was a cacophony: button labels were read in one contiguous flow with reports of what was happening in the simulated world.

This confusion does not occur for sighted users of our product, because the fictional / diegetic / simulated world is simply "different" from the graphical interface used to interact with it.

I am convinced that we can make the UI behave understandably, but I am quite puzzled as to how we can use aria-live regions for content that updates more than once per second without the whole experience descending into cacophony.

I used "polite" aria-live regions, a setting which promises (according to the spec) to allow some kind of graceful arbitration between the different types of content, rather than a babble of competing word salad.

Most discussions of aria-live seem to assume "page"- or "document"-type content. I followed their recommendations and the result was so disappointing that I am now looking for alternatives. There is a bit of a scene developing around "accessible games", but it seems to consist mostly of players rather than developers, and discussions of techniques and implementations are about as rare as rocking-horse droppings.

I know that there is a (contentious) effort to get screen readers to support CSS3 Speech, so that different semantics can be "styled" with different voices.

This would be a very good (and standards-based!) solution to our problem, but it seems that the screen-reader "community" (developers, engineers and users) either considers it a low-priority feature or actively opposes it (for reasons that generally do not apply in our case). There is certainly no implementation we can reasonably rely on.

The question I am asking is this: how do you design the UX of a relatively fast-paced "game"-type application so that the live regions belonging to the in-fiction (diegetic) world sound different from the user interface?

I have some ideas.

  • Handle in-fiction / diegetic content with our own accessible audio (e.g., prerecorded MP3s) rather than being at the mercy of how the screen reader handles aria-live. (More audio is more work, though.)

  • Prefix in-fiction / diegetic content with a distinct "beep" or other short sound effect.

  • Try to choreograph the changes made to the aria-live regions so that they interfere much less with the UI label reads. ("polite" on steroids).

  • Offer a special "training level" so that screen reader users can discover the user interface without the simultaneous urgency of saving the life of an imaginary patient.

Can anyone tell me whether any of these are obvious dead ducks, and perhaps suggest other avenues worth exploring?