[PySpark] RDD programming on a large file

Key points

1. The parallelize method

In general, Spark tries to set the number of slices (partitions) automatically based on your cluster. However, you can also set it manually by passing it as the second argument to parallelize.

data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 1)
works = data_reduce.reduce(lambda x, y: x / y)

10.0

data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 3)
data_reduce.reduce(lambda x, y: x / y)

0.004
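
To check what Spark chose, you can inspect the partition count directly; a minimal sketch, assuming an active SparkContext sc:

rdd_auto = sc.parallelize(range(100))        # let Spark pick the number of slices
rdd_manual = sc.parallelize(range(100), 8)   # set it explicitly
print(sc.defaultParallelism)                 # cluster-derived default
print(rdd_auto.getNumPartitions())           # usually equals the default
print(rdd_manual.getNumPartitions())         # 8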

 

 


 

 

 

Test 1: processing a large file

 

Init.

In [1]:
from pyspark import SparkConf, SparkContext
import datetime
In [2]:
def fnGetAppName():
    # Capture the current time once so all components come from the same instant.
    now = datetime.datetime.now()

    currentSecond = now.second
    currentMinute = now.minute
    currentHour = now.hour

    currentDay = now.day
    currentMonth = now.month
    currentYear = now.year

    return "{}-{}-{}_{}-{}-{}".format(currentYear, currentMonth, currentDay, currentHour, currentMinute, currentSecond)
In [3]:
appName = fnGetAppName()
print("appName: {}".format(appName))
conf = SparkConf().setMaster("spark://node-master:7077").setAppName(appName)
sc = SparkContext(conf = conf)
 
appName: 2019-11-3_20-41-11
In [4]:
logFile = "/dataset/VS14MORT.DUSMCPUB"
data_from_file = sc.textFile(logFile, 2).cache()
In [5]:
def fn_timer(a_func):
    # A simple timing decorator: wraps a zero-argument function and prints
    # its elapsed wall-clock time after it returns.
    def wrapTheFunction():
        import time
        time_start = time.time()

        a_func()

        time_end = time.time()
        print('totally cost {} sec'.format(time_end - time_start))

    return wrapTheFunction
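
The decorator is not used later in this notebook; a hypothetical usage (count_records is an illustrative name) would look like this:

@fn_timer
def count_records():
    # A full count forces evaluation of the cached RDD.
    print(data_from_file.count())

count_records()   # prints the count, then "totally cost ... sec"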
 

Preview.

In [6]:
data_from_file.take(2)
Out[6]:
['                   1                                          2101  M1087 432311  4M4                2014U7CN                                    I64 238 070   24 0111I64                                                                                                                                                                           01 I64                                                                                                  01  11                                 100 601',
 '                   1                                          2101  M1058 371708  4D3                2014U7CN                                    I250214 062   21 0311I250 61I272 62E669                                                                                                                                                            03 I250 E669 I272                                                                                       01  11                                 100 601']
 

Extract the fields rigorously.

In [7]:
def extractInformation(row):
    import re
    import numpy as np

    selected_indices = [
         2,4,5,6,7,9,10,11,12,13,14,15,16,17,18,
         19,21,22,23,24,25,27,28,29,30,32,33,34,
         36,37,38,39,40,41,42,43,44,45,46,47,48,
         49,50,51,52,53,54,55,56,58,60,61,62,63,
         64,65,66,67,68,69,70,71,72,73,74,75,76,
         77,78,79,81,82,83,84,85,87,89
    ]

    '''
        Input record schema
        schema: n-m (o) -- xxx
            n - position from
            m - position to
            o - number of characters
            xxx - description
        1. 1-19 (19) -- reserved positions
        2. 20 (1) -- resident status
        3. 21-60 (40) -- reserved positions
        4. 61-62 (2) -- education code (1989 revision)
        5. 63 (1) -- education code (2003 revision)
        6. 64 (1) -- education reporting flag
        7. 65-66 (2) -- month of death
        8. 67-68 (2) -- reserved positions
        9. 69 (1) -- sex
        10. 70 (1) -- age: 1-years, 2-months, 4-days, 5-hours, 6-minutes, 9-not stated
        11. 71-73 (3) -- number of units (years, months etc)
        12. 74 (1) -- age substitution flag (if the age reported in positions 70-74 is calculated using dates of birth and death)
        13. 75-76 (2) -- age recoded into 52 categories
        14. 77-78 (2) -- age recoded into 27 categories
        15. 79-80 (2) -- age recoded into 12 categories
        16. 81-82 (2) -- infant age recoded into 22 categories
        17. 83 (1) -- place of death
        18. 84 (1) -- marital status
        19. 85 (1) -- day of the week of death
        20. 86-101 (16) -- reserved positions
        21. 102-105 (4) -- current year
        22. 106 (1) -- injury at work
        23. 107 (1) -- manner of death
        24. 108 (1) -- manner of disposition
        25. 109 (1) -- autopsy
        26. 110-143 (34) -- reserved positions
        27. 144 (1) -- activity code
        28. 145 (1) -- place of injury
        29. 146-149 (4) -- ICD code
        30. 150-152 (3) -- 358 cause recode
        31. 153 (1) -- reserved position
        32. 154-156 (3) -- 113 cause recode
        33. 157-159 (3) -- 130 infant cause recode
        34. 160-161 (2) -- 39 cause recode
        35. 162 (1) -- reserved position
        36. 163-164 (2) -- number of entity-axis conditions
        37-56. 165-304 (140) -- list of up to 20 conditions
        57. 305-340 (36) -- reserved positions
        58. 341-342 (2) -- number of record axis conditions
        59. 343 (1) -- reserved position
        60-79. 344-443 (100) -- record axis conditions
        80. 444 (1) -- reserve position
        81. 445-446 (2) -- race
        82. 447 (1) -- bridged race flag
        83. 448 (1) -- race imputation flag
        84. 449 (1) -- race recode (3 categories)
        85. 450 (1) -- race recode (5 categories)
        86. 461-483 (33) -- reserved positions
        87. 484-486 (3) -- Hispanic origin
        88. 487 (1) -- reserved
        89. 488 (1) -- Hispanic origin/race recode
     '''

    record_split = re\
        .compile(
            r'([\s]{19})([0-9]{1})([\s]{40})([0-9\s]{2})([0-9\s]{1})([0-9]{1})([0-9]{2})' + 
            r'([\s]{2})([FM]{1})([0-9]{1})([0-9]{3})([0-9\s]{1})([0-9]{2})([0-9]{2})' + 
            r'([0-9]{2})([0-9\s]{2})([0-9]{1})([SMWDU]{1})([0-9]{1})([\s]{16})([0-9]{4})' +
            r'([YNU]{1})([0-9\s]{1})([BCOU]{1})([YNU]{1})([\s]{34})([0-9\s]{1})([0-9\s]{1})' +
            r'([A-Z0-9\s]{4})([0-9]{3})([\s]{1})([0-9\s]{3})([0-9\s]{3})([0-9\s]{2})([\s]{1})' + 
            r'([0-9\s]{2})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' + 
            r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' + 
            r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' + 
            r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' + 
            r'([A-Z0-9\s]{7})([\s]{36})([A-Z0-9\s]{2})([\s]{1})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' + 
            r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' + 
            r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' + 
            r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' + 
            r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([\s]{1})([0-9\s]{2})([0-9\s]{1})' + 
            r'([0-9\s]{1})([0-9\s]{1})([0-9\s]{1})([\s]{33})([0-9\s]{3})([0-9\s]{1})([0-9\s]{1})')
    try:
        rs = np.array(record_split.split(row))[selected_indices]
    except Exception:  # rows that fail to match the schema become '-99' sentinels
        rs = np.array(['-99'] * len(selected_indices))
    return rs
#     return record_split.split(row)
In [8]:
data_from_file_conv = data_from_file.map(extractInformation)
data_from_file_conv
Out[8]:
PythonRDD[3] at RDD at PythonRDD.scala:53
In [9]:
# data_from_file_conv.map(lambda row: row).take(1)
data_from_file_conv.take(1)
Out[9]:
[array(['1', '  ', '2', '1', '01', 'M', '1', '087', ' ', '43', '23', '11',
        '  ', '4', 'M', '4', '2014', 'U', '7', 'C', 'N', ' ', ' ', 'I64 ',
        '238', '070', '   ', '24', '01', '11I64  ', '       ', '       ',
        '       ', '       ', '       ', '       ', '       ', '       ',
        '       ', '       ', '       ', '       ', '       ', '       ',
        '       ', '       ', '       ', '       ', '       ', '01',
        'I64  ', '     ', '     ', '     ', '     ', '     ', '     ',
        '     ', '     ', '     ', '     ', '     ', '     ', '     ',
        '     ', '     ', '     ', '     ', '     ', '     ', '01', ' ',
        ' ', '1', '1', '100', '6'], dtype='<U40')]
 

RDD Programming

 

Transformations

 

.map(...)

 

The method applies a function to each element of the RDD: in the case of the data_from_file_conv dataset you can think of it as a transformation of each row.

In [10]:
data_2014 = data_from_file_conv.map(lambda row: int(row[16]))
data_2014.take(10)
Out[10]:
[2014, 2014, 2014, 2014, 2014, 2014, 2014, 2014, 2014, -99]
 

You can also combine several columns.

In [11]:
data_2014_2 = data_from_file_conv.map(lambda row: (row[16], int(row[16])))
data_2014_2.take(10)
Out[11]:
[('2014', 2014),
 ('2014', 2014),
 ('2014', 2014),
 ('2014', 2014),
 ('2014', 2014),
 ('2014', 2014),
 ('2014', 2014),
 ('2014', 2014),
 ('2014', 2014),
 ('-99', -99)]
 

.filter(...)

 

The .filter(...) method allows you to select elements of your dataset that fit specified criteria.

In [12]:
data_filtered = data_from_file_conv.filter(lambda row: row[5] == 'F' and row[21] == '0')
data_filtered.count()
Out[12]:
6
 

.sample(...)

 

The .sample() method returns a randomized sample from the dataset.

In [13]:
fraction = 0.1
data_sample = data_from_file_conv.sample(False, fraction, 666)  # withReplacement=False, fraction=0.1, seed=666
#data_sample.take(1)
 

Let's confirm that we really got 10% of all the records.

In [14]:
print('Original dataset: {0}, sample: {1}'.format(data_from_file_conv.count(), data_sample.count()))
 
Original dataset: 2631171, sample: 262645
 

.flatMap(...)

 

The .flatMap(...) method works similarly to .map(...) but returns a flattened result instead of a list.

In [15]:
data_2014_flat = data_from_file_conv.flatMap(lambda row: (row[16], int(row[16]) + 1))
data_2014_flat.take(10)
Out[15]:
['2014', 2015, '2014', 2015, '2014', 2015, '2014', 2015, '2014', 2015]
 

.distinct()

 

This method returns a list of distinct values in a specified column.

In [16]:
distinct_gender = data_from_file_conv.map(lambda row: row[5]).distinct().collect()
distinct_gender
Out[16]:
['F', '-99', 'M']
 

.leftOuterJoin(...)

 

A left outer join, just like in the SQL world, joins two RDDs based on their keys and returns records from the left RDD with records from the right one appended where the keys match (and None where they do not).

In [17]:
rdd1 = sc.parallelize([('a', 1), ('b', 4), ('c',10)])
rdd2 = sc.parallelize([('a', 4), ('a', 1), ('b', '6'), ('d', 15)])

rdd3 = rdd1.leftOuterJoin(rdd2)
rdd3.take(5)
Out[17]:
[('b', (4, '6')), ('a', (1, 1)), ('a', (1, 4)), ('c', (10, None))]
 

If we used the .join(...) method instead, we would have gotten only the values for 'a' and 'b', as these two keys appear in both RDDs.

In [18]:
rdd4 = rdd1.join(rdd2)
rdd4.collect()
Out[18]:
[('b', (4, '6')), ('a', (1, 4)), ('a', (1, 1))]
 

Another useful method is .intersection(...), which returns the records that appear in both RDDs.

In [19]:
rdd5 = rdd1.intersection(rdd2)
rdd5.collect()
Out[19]:
[('a', 1)]
 

.repartition(...)

 

Repartitioning the dataset changes the number of partitions it is divided into; note that this involves a full shuffle of the data (a cheaper alternative is sketched after the next cell).

In [20]:
rdd1 = rdd1.repartition(4)

len(rdd1.glom().collect())
Out[20]:
4
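
When you only need to reduce the number of partitions, .coalesce(...) avoids the full shuffle; a minimal sketch reusing rdd1 from the cell above:

rdd1_small = rdd1.coalesce(2)              # shrink from 4 to 2 partitions without a full shuffle
print(len(rdd1_small.glom().collect()))    # 2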
 

Actions

 

.take(...)

 

The method returns the first n rows of the RDD; Spark scans as few partitions as needed, typically starting with just one.

In [21]:
data_first = data_from_file_conv.take(1)
data_first
Out[21]:
[array(['1', '  ', '2', '1', '01', 'M', '1', '087', ' ', '43', '23', '11',
        '  ', '4', 'M', '4', '2014', 'U', '7', 'C', 'N', ' ', ' ', 'I64 ',
        '238', '070', '   ', '24', '01', '11I64  ', '       ', '       ',
        '       ', '       ', '       ', '       ', '       ', '       ',
        '       ', '       ', '       ', '       ', '       ', '       ',
        '       ', '       ', '       ', '       ', '       ', '01',
        'I64  ', '     ', '     ', '     ', '     ', '     ', '     ',
        '     ', '     ', '     ', '     ', '     ', '     ', '     ',
        '     ', '     ', '     ', '     ', '     ', '     ', '01', ' ',
        ' ', '1', '1', '100', '6'], dtype='<U40')]
 

If you want somewhat randomized records you can use .takeSample(...) instead.

In [22]:
data_take_sampled = data_from_file_conv.takeSample(False, 1, 667)
data_take_sampled
Out[22]:
[array(['2', '17', ' ', '0', '08', 'M', '1', '069', ' ', '39', '19', '09',
        '  ', '1', 'M', '7', '2014', 'U', '7', 'U', 'N', ' ', ' ', 'I251',
        '215', '063', '   ', '21', '06', '11I500 ', '21I251 ', '61I499 ',
        '62I10  ', '63N189 ', '64K761 ', '       ', '       ', '       ',
        '       ', '       ', '       ', '       ', '       ', '       ',
        '       ', '       ', '       ', '       ', '       ', '05',
        'I251 ', 'I120 ', 'I499 ', 'I500 ', 'K761 ', '     ', '     ',
        '     ', '     ', '     ', '     ', '     ', '     ', '     ',
        '     ', '     ', '     ', '     ', '     ', '     ', '01', ' ',
        ' ', '1', '1', '100', '6'], dtype='<U40')]
 

.reduce(...)

 

Another action that processes your data, the .reduce(...) method reduces the elements of an RDD using a specified method.

In [23]:
rdd1.map(lambda row: row[1]).reduce(lambda x, y: x + y)
Out[23]:
15
 

If the reducing function is not associative and commutative, you will sometimes get wrong results, depending on how your data is partitioned.

In [24]:
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 1)
 

If we were to reduce the data by dividing the current result by the subsequent value, we would expect a value of 10:

In [25]:
works = data_reduce.reduce(lambda x, y: x / y)
works
Out[25]:
10.0
 

However, if you partition the data into 3 partitions, the result will be wrong (the sketch after the output shows why).

In [26]:
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 3)
data_reduce.reduce(lambda x, y: x / y)
Out[26]:
0.004
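
This happens because .reduce(...) first reduces each partition locally and then combines the partial results; since division is neither associative nor commutative, the outcome depends on the partition boundaries. A minimal sketch, using .glom() to inspect the partitions, reproduces the 0.004 by hand:

parts = data_reduce.glom().collect()
print(parts)                                  # e.g. [[1, 2], [0.5, 0.1], [5, 0.2]]

# Reduce each partition locally, then combine the partial results:
# 1/2 = 0.5, 0.5/0.1 = 5.0, 5/0.2 = 25.0, and finally 0.5/5.0/25.0 = 0.004
from functools import reduce
partials = [reduce(lambda x, y: x / y, p) for p in parts]
print(reduce(lambda x, y: x / y, partials))   # 0.004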
 

The .reduceByKey(...) method works in a similar way to the .reduce(...) method, but performs the reduction on a key-by-key basis; values are combined locally within each partition before the partial results are shuffled.

In [27]:
data_key = sc.parallelize([('a', 4),('b', 3),('c', 2),('a', 8),('d', 2),('b', 1),('d', 3)],4)
data_key.reduceByKey(lambda x, y: x + y).collect()
Out[27]:
[('b', 4), ('c', 2), ('a', 12), ('d', 5)]
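
For comparison, the same totals can be computed with .groupByKey(...) followed by .mapValues(...), but that ships every value across the network before summing, so .reduceByKey(...) is usually preferred; a minimal sketch:

data_key.groupByKey().mapValues(sum).collect()
# same totals as above, e.g. [('b', 4), ('c', 2), ('a', 12), ('d', 5)], but more data is shuffled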
 

.count()

 

The .count() method counts the number of elements in the RDD.

In [28]:
data_reduce.count()
Out[28]:
6
 

It has the same effect as the expression below, but does not require moving the data to the driver.

In [29]:
len(data_reduce.collect()) # WRONG -- DON'T DO THIS!
Out[29]:
6
 

If your dataset is in a key-value form, you can use the .countByKey() method to get the counts of distinct keys.

In [30]:
data_key.countByKey().items()
Out[30]:
dict_items([('a', 2), ('b', 2), ('c', 1), ('d', 2)])
 

.saveAsTextFile(...)

 

As the name suggests, .saveAsTextFile(...) converts each element of the RDD to its string representation and saves the result to text files: each partition goes to a separate file.

In [31]:
data_key.saveAsTextFile('/Users/drabast/Documents/PySpark_Data/data_key.txt')
 

To read it back you need to parse it, because, as before, all the rows are treated as strings.

In [32]:
def parseInput(row):
    import re
    
    pattern = re.compile(r'\(\'([a-z])\', ([0-9])\)')
    row_split = pattern.split(row)
    
    return (row_split[1], int(row_split[2]))
    
data_key_reread = sc \
    .textFile('/Users/drabast/Documents/PySpark_Data/data_key.txt') \
    .map(parseInput)
data_key_reread.collect()
Out[32]:
[('a', 4), ('b', 3), ('c', 2), ('a', 8), ('d', 2), ('b', 1), ('d', 3)]
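
If a human-readable format is not required, an alternative worth considering is .saveAsPickleFile(...) paired with sc.pickleFile(...), which round-trips the Python tuples without manual parsing; a minimal sketch (the path is illustrative):

data_key.saveAsPickleFile('/Users/drabast/Documents/PySpark_Data/data_key.pickle')  # illustrative path
sc.pickleFile('/Users/drabast/Documents/PySpark_Data/data_key.pickle').collect()
# the original key-value pairs come back as Python tuples, no parsing needed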
 

.foreach(...)

 

A method that applies the same function to each element of the RDD in an iterative way; unlike .map(...), it returns nothing to the driver, and on a cluster any print output appears in the executors' logs rather than in the notebook.

In [33]:
def f(x): 
    print(x)

data_key.foreach(f)
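
A common pattern for bringing a side-effect result back to the driver is an accumulator; a minimal sketch:

acc = sc.accumulator(0)

def add_one(x):
    acc.add(1)            # executed on the executors

data_key.foreach(add_one)
print(acc.value)          # 7 -- readable on the driver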