magento2 – Migrating code & data from 1.9 to 2

Our site is currently running in PHP 5.4 with Magento 1.9. I want to migrate the site to the latest version of Magento along with PHP 7.

We have created a lot of custom modules and are using a custom theme.

What is the best & easiest way to migrate the code & data?

Also, I’ve come across the official “code migration” tool, but its link leads to a 404 page. Is the link outdated? Is there a newer document available for code migration?

Thanks in advance.

Enable specific CSS Code for Visitors and specific Roles

How can I enable a specific piece of CSS code for visitors and for a specific user role?

For example, this code:

input[type=radio] {
    border: 1px solid !important;
}
machine learning – How is the code of voice assistants structured?

I am curious how voice assistants like Siri, Alexa, Google Assistant, Alisa and others process commands. Is it lots of regexes?
I’ve experimented with a mobile app that analyzes the text of what the user said against regex patterns, and if the text matches a regex, it executes an action.
For example, commands like: “30 minutes to workout”, “50 minutes for homework session”.
I thought about how a user would ask for a timer, and these are the kinds of sentences I came up with.

I’ve heard of Natural Language Processing, but I don’t really understand what it is.

Is it all just bunches of regexes? If so, how are they structured?

And if all the features are hand-picked by humans, how is that kept manageable? Reading the behaviour straight from the code seems unrealistic, so I guess there must be some good documentation or a structure document alongside it?
How are these voice assistants organized (from the standpoint of text analysis)?

Or is it a pre-trained machine learning model file?

Please correct me if the question is not formulated well, and tell me how to improve it.

Is it really good practice in Python code for machine learning to use so many parameters?

Currently I am a student learning Machine Learning, and so my observation is from an academic context. It may be different in a business environment.

One thing I find very odd when I see Python code for machine learning is that when you call a typical network class, you send it lots of parameters, which strikes me as a risky thing to do.
Surely it would be better practice to put all of your hyperparameters and configuration values in a dedicated class, so that when you want to change the values they are easy to find?

If the class is in a separate file, you can simply copy it across to the next project.

Would anyone want to comment on why this is not the obvious thing to do?
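What the question proposes can be sketched with a dataclass (all names here are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Hyperparameters:
    """All tunable values in one place; names and defaults are hypothetical."""
    learning_rate: float = 1e-3
    batch_size: int = 32
    hidden_units: int = 128
    dropout: float = 0.5

class Network:
    """A network class that takes one config object
    instead of a long list of individual parameters."""
    def __init__(self, hp: Hyperparameters):
        self.hp = hp

# Override only the values that differ from the defaults.
hp = Hyperparameters(learning_rate=3e-4)
net = Network(hp)
print(net.hp.learning_rate)  # 0.0003
```

A common counter-argument is that explicit keyword parameters make each call site self-documenting and let library users override a single value without defining a class, which is part of why many ML libraries expose long parameter lists.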

Python code that has some divergences in regards to logger, printing statistics, writing of files, file paths and docstring

I think I have corrected most of the divergences, but I am still concerned about the docstrings especially. For example, docstrings should always be placed directly under the function signature, within its body. Have I used docstrings in an appropriate way or not? If you can still find any errors regarding file paths, the logger, printing statistics, writing of files, my comments or docstrings, please let me know.

Code & Output

All file paths need to be based upon the variable RESOURCES, and to provide platform independence all paths should be constructed using pathlib. I need to change the value of RESOURCES:

RESOURCES = Path(__file__).parent / '../_Resources/'


The name of the logger needs to be ass_3_logger. Check.

Printing Statistics

Duration headers need to be aligned with the duration values, in a table-column fashion, and the names of the Fibonacci approaches need to be formatted in conformity with the specified requirements:

              DURATION FOR EACH APPROACH WITHIN INTERVAL: 30-0              
                       Seconds   Milliseconds   Microseconds    Nanoseconds
Fib Iteration          0.00047        0.47289          472.9         472892
Fib Recursion          0.74339      743.39246       743392.5      743392458
Fib Memory             0.00052        0.52040          520.4         520404

I think I have corrected this divergence, though I’m not 100% sure.

Writing Files

Works as intended and mostly meets the stated requirements. However, the names of the produced files do not conform to the stated requirements.

The implementation would perhaps become cleaner using zip(), which lets us traverse two containers at the same time:

with open(file_path, 'w') as file:
    for idx, value in zip(range(len(details[1]) - 1, -1, -1), details[1].values()):
        file.write("{}: {}\n".format(idx, value))


  • Attend divergence in regards to logger!
  • Attend divergences in regards to printing statistics!
  • Attend divergence in regards to writing of files!
  • Make sure all file paths are constructed properly using pathlib!
  • I have to add more comments which describe my implementations, and make sure
    I also place docstrings within the body of the respective function!

Complete solution:

#!/usr/bin/env python

"""
Below you find the inherent code, some of which is fully defined. You add
implementations for those functions which are needed:

 - create_logger()
 - measurements_decorator(..)
 - fibonacci_memory(..)
 - print_statistics(..)
 - write_to_file(..)
"""
from pathlib import Path
from timeit import default_timer as timer
from functools import wraps
import argparse
import logging
import logging.config
import json
import codecs
import time

__version__ = '1.0'
__desc__ = "Program used for measuring execution time of various Fibonacci implementations!"

LINE = '\n' + ("---------------" * 5)
RESOURCES = Path.cwd() / "../_Resources/"
LOGGER = None  # declared at module level, will be defined from main()

def create_logger() -> logging.Logger:
    """Create and return logger object.
    Purpose: This method creates the logger object and returns it.
    :param: None
    :return: Logger object."""

    # Load the configuration.
    config_file = str(RESOURCES) + "/ass3_log_conf.json"
    with open(config_file, "r", encoding="utf-8") as fd:
        config = json.load(fd)

    # Set up proper logging. This one disables the previously configured loggers.
    logging.config.dictConfig(config)
    logger = logging.getLogger('ass_3_logger')
    return logger

def measurements_decorator(func):
    """Function decorator, used for time measurements.
    Purpose: This is a decorator which is used for measuring
    a function's execution time and printing logs.
    :param: func
    :return: tuple(float, dict)."""

    @wraps(func)
    def wrapper(nth_nmb: int) -> tuple:
        result = {}
        k = 5
        ts = time.time()"Starting measurements...")
        for i in reversed(range(nth_nmb + 1)):
            result[i] = func(i)
            if k == 5:  # log every fifth value
                LOGGER.debug(str(i) + " : " + str(result[i]))
                k = 0
            k += 1
        te = time.time()
        return (te - ts), result

    return wrapper

@measurements_decorator
def fibonacci_iterative(nth_nmb: int) -> int:
    """An iterative approach to find Fibonacci sequence value.
    Purpose: This is a function to calculate the Fibonacci series using an iterative approach.
    :param: int
    :return: int."""

    old, new = 0, 1
    if nth_nmb in (0, 1):
        return nth_nmb
    for __ in range(nth_nmb - 1):
        old, new = new, old + new
    return new

@measurements_decorator
def fibonacci_recursive(nth_nmb: int) -> int:
    """A recursive approach to find Fibonacci sequence value.
    Purpose: This is a function to calculate the Fibonacci series using a recursive approach.
    :param: int
    :return: int."""

    def fib(_n):
        return _n if _n <= 1 else fib(_n - 1) + fib(_n - 2)

    return fib(nth_nmb)

@measurements_decorator
def fibonacci_memory(nth_nmb: int) -> int:
    """A recursive approach to find Fibonacci sequence value, storing those already calculated.
    Purpose: This is a function to calculate the Fibonacci series using a memoization approach.
    :param: int
    :return: int."""

    memory_dict = {0: 0, 1: 1}

    def fib(_n):
        if _n not in memory_dict:
            memory_dict[_n] = fib(_n - 1) + fib(_n - 2)
        return memory_dict[_n]

    return fib(nth_nmb)

def duration_format(duration: float, precision: str) -> str:
    """Function to convert a duration into a formatted string. Switcher is a dictionary here.
    Purpose: This is a function to properly format a duration.
    :param: float, str
    :return: str."""
    switcher = {
        'Seconds': "{:.5f}".format(duration),
        'Milliseconds': "{:.5f}".format(duration * 1_000),
        'Microseconds': "{:.1f}".format(duration * 1_000_000),
        'Nanoseconds': "{:d}".format(int(duration * 1_000_000_000))
    }

    # The get() method of a dictionary returns the value of the passed key if it is
    # present in the dictionary; otherwise the second argument is returned as a default.
    return switcher.get(precision, "nothing")

# The purpose of this function is to display the statistics.
def print_statistics(fib_details: dict, nth_value: int):
    """Function which handles printing to console."""
    print("\n\t  DURATION FOR EACH APPROACH WITHIN INTERVAL: " + str(nth_value) + "-0")
    print("{0}\t\t\t\t {1:<7}\t {2:<7}\t {3:<7}\t {4:<7}".format(
        "", "Seconds", "Milliseconds", "Microseconds", "Nanoseconds"))
    for function in fib_details:
        print("{0}\t {1:<7}\t {2:<13}\t {3:<14}\t {4:<7}".format(
            function,
            duration_format(fib_details[function][0], "Seconds"),
            duration_format(fib_details[function][0], "Milliseconds"),
            duration_format(fib_details[function][0], "Microseconds"),
            duration_format(fib_details[function][0], "Nanoseconds")))

# The purpose of this function is to write the results into files.
def write_to_file(fib_details: dict):
    """Function to write information to file."""
    for function in fib_details:
        with open(str(RESOURCES) + "/" + function + ".txt", "w") as file:
            for idx, value in zip(range(len(fib_details[function][1]) - 1, -1, -1),
                                  fib_details[function][1].values()):
                file.write("{}: {}\n".format(idx, value))

def main():
    """The main program execution. YOU MAY NOT MODIFY ANYTHING IN THIS FUNCTION!!"""
    epilog = "DT179G Assignment 3 v" + __version__
    parser = argparse.ArgumentParser(description=__desc__, epilog=epilog, add_help=True)
    parser.add_argument('nth', metavar='nth', type=int, nargs='?', default=30,
                        help="nth Fibonacci sequence to find.")

    global LOGGER  # ignore warnings raised from linters, such as PyLint!
    LOGGER = create_logger()

    args = parser.parse_args()
    nth_value = args.nth  # nth value to sequence. Will fallback on default value!

    fib_details = {  # store measurement information in a dictionary
        'fib iteration': fibonacci_iterative(nth_value),
        'fib recursion': fibonacci_recursive(nth_value),
        'fib memory   ': fibonacci_memory(nth_value)
    }

    print_statistics(fib_details, nth_value)  # print information in console
    write_to_file(fib_details)  # write data files

if __name__ == "__main__":
    main()

python – Building two packages with different requirements from same source code using conda-build

I am working on a project that uses Tensorflow. The requirement is to package my code as conda package using conda-build.

Tensorflow does not yet have a single package on conda that supports both CPU and GPU (see this question). Instead, Tensorflow on conda is two packages: tensorflow for CPU and tensorflow-gpu for GPU.

This forces me to build two packages for my project, one for CPU and one for GPU. What is the neatest way to do that using conda-build, without having to maintain two repos?

Is it possible to have multiple meta.yaml files to build from using conda-build?
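conda-build 3 supports building several packages from one recipe via an `outputs:` section in a single meta.yaml, which seems like the neatest fit here. A sketch (the package names and version are placeholders):

```yaml
# meta.yaml — one recipe, two output packages (names are placeholders)
package:
  name: myproject-split     # top-level name; not itself published
  version: "1.0"

source:
  path: .

outputs:
  - name: myproject         # CPU variant
    requirements:
      run:
        - python
        - tensorflow
  - name: myproject-gpu     # GPU variant
    requirements:
      run:
        - python
        - tensorflow-gpu
```

Each entry under `outputs:` becomes its own package with its own run requirements, so a single `conda build .` in one repo can produce both variants.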


Is there any code for a disposable email block in Magento 1.9?

I am getting a lot of fake registrations and form submissions on WordPress and Magento 1.9. Please suggest some ideas.

Thank you

google sheets – Possible to color code text based on JUST the text value and automatically?

I’ve been wondering about this for years… it would be such a life-changer.

Let’s say I have a table:

Spectrum Bills
XYZ Rent
ABC Food/Beverage
Spectrum Bills

is there any way to do conditional formatting on the 2nd column such that:

  1. Once a text value is entered, it gets some default color
  2. If the text is “seen again” in that column, it also gets that same color

e.g. in this example, Bills could be red, rent yellow, food/beverage green, and then Bills on row FOUR becomes red because Bills is already an entry

I know you can do this manually via conditional formatting by creating one rule per category, but I make so many sheets with so many different purposes, and the text values are rarely the same… and doing the conditional formatting, choosing the damned color every single time, over and over… and then of course you have a new category or text field LATER, and that one has no rule associated with it, so back in we go…

I’ve googled for this so many times and have never seen any solution/add-on, but man, I’d think this would be so useful to people. If there’s a tool, you will have made my day.

SharePoint 2019 Solutions with sandboxed code are disabled

I have a SharePoint 2019 server with a single-server farm role. I have enabled the sandboxed code service following these steps: enable-sandbox-solutions-on-sharepoint-2016.
When I upload a sandbox solution containing a workflow, the Activate button is enabled. When I click Activate I get the ULS error: “Solutions with sandboxed code are disabled”,
and on screen: “Activation of solutions with sandboxed code has been disabled”.

Any ideas?

When I create a new Sandbox solution in Visual Studio and upload this to my site I can activate the solution.

Are workflows no longer allowed in Sandbox solutions?

performance tuning – Speeding up code doing random sampling

I have a particular probability distribution (that is a function of a parameter $\phi$) that I would like to sample from, and then make a ListPlot of the results.

Here is the code that I came up with:

quadratures =
  ProbabilityDistribution[
   Abs[Sum[(\[Alpha] E^(I \[Phi]))^n/Sqrt[n!] 1/Sqrt[2^n n!]
       ((m \[Omega])/(\[Pi] \[HBar]))^(1/4)
       Exp[-((m \[Omega] z^2)/(2 \[HBar]))]
       HermiteH[n, Sqrt[(m \[Omega])/\[HBar]] z], {n, 0, 8}]]^2,
   {z, -\[Infinity], \[Infinity]}] /. {\[Alpha] -> 1};

points = 100;
\[Phi]list = Subdivide[2 \[Pi], points];
data = RandomVariate[quadratures /. \[Phi] -> #] & /@ \[Phi]list;

It seems to work, but it’s very slow. Originally the code had a few functions within functions, and I tried putting everything in a compact form in the hope that it would speed up, but it’s still too slow.

Ideally I’d like to sample something like 1000 points, while right now it takes a very long time to do even 100 samplings.