In all cases, assume my_function is much more complicated than x*2. The purpose here is only to demonstrate the technique; for a problem this trivial, every one of these approaches is the wrong way to handle it.
Using NumPy
Use np.vectorize, e.g.:
import numpy as np
# Define an arbitrary function
def my_function(x):
    return x*2
# Vectorize the function
vectorized_function = np.vectorize(my_function)
# Apply the vectorized function to a NumPy array
arr = np.array([1, 2, 3, 4])
result = vectorized_function(arr)
# Result: [2, 4, 6, 8]
Using Pandas
Use df.map (called df.applymap before v2.1.0), e.g.:
import pandas as pd
df = pd.DataFrame(data={"x": [1, 2, 3, 4]})
df = df.map(lambda x: x*2)
# df["x"] is now [2, 4, 6, 8]Using sklearn
Using sklearn
Use FunctionTransformer, e.g.:
from sklearn.preprocessing import FunctionTransformer
import numpy as np
t = FunctionTransformer(lambda x: x*2)
arr = np.array([1, 2, 3, 4])
result = t.fit_transform(arr) # can also just use t.transform
# Result: [2, 4, 6, 8]
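One reason to reach for FunctionTransformer rather than calling the function directly is that it can sit alongside other steps in a Pipeline. A minimal sketch (the two-step pipeline below is only an illustration, not part of the original answer):
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
import numpy as np
# Apply the arbitrary function, then hand the result to a standard sklearn transformer
pipe = Pipeline(steps=[
    ("double", FunctionTransformer(lambda x: x*2)),
    ("scale", StandardScaler()),
])
X = np.array([[1.0], [2.0], [3.0], [4.0]])  # sklearn transformers expect 2D input
result = pipe.fit_transform(X)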
Using PyTorch
You can just apply the function directly: PyTorch's arithmetic operators are overloaded to work element-wise on tensors, so no explicit vectorization step is needed. Note that if the function is capable of behaving differently for tensor data, it will do exactly that, so be careful (a concrete example of this pitfall follows the snippet below).
import torch
# Define an arbitrary function
def my_function(x):
    return x*2
# Apply the function directly to a PyTorch tensor
arr = torch.IntTensor([1, 2, 3, 4])
result = my_function(arr)
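# Result: [2, 4, 6, 8]
As an illustration of the caveat above, here is a hypothetical function (not from the original answer) whose Python-level branch works on a scalar but raises on a multi-element tensor:
import torch
def my_branching_function(x):
    # `x > 0` is a plain bool for a scalar, but a Boolean tensor for a tensor;
    # using `if` on a multi-element Boolean tensor raises a RuntimeError
    if x > 0:
        return x*2
    return x
my_branching_function(3)  # fine: returns 6
# my_branching_function(torch.IntTensor([1, -2, 3]))
# RuntimeError: Boolean value of Tensor with more than one element is ambiguous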